2024-08-07T17:48:11.3322609Z Current runner version: '2.318.0'
2024-08-07T17:48:11.3333294Z Runner name: 'i-07832b6703dca2070'
2024-08-07T17:48:11.3334601Z Runner group name: 'Default'
2024-08-07T17:48:11.3335903Z Machine name: 'ip-10-0-62-73'
2024-08-07T17:48:11.3342424Z ##[group]GITHUB_TOKEN Permissions
2024-08-07T17:48:11.3346066Z Actions: read
2024-08-07T17:48:11.3346946Z Attestations: read
2024-08-07T17:48:11.3347913Z Checks: read
2024-08-07T17:48:11.3348881Z Contents: read
2024-08-07T17:48:11.3350112Z Deployments: read
2024-08-07T17:48:11.3351151Z Discussions: read
2024-08-07T17:48:11.3352073Z Issues: read
2024-08-07T17:48:11.3352876Z Metadata: read
2024-08-07T17:48:11.3353786Z Packages: read
2024-08-07T17:48:11.3354691Z Pages: read
2024-08-07T17:48:11.3355472Z PullRequests: read
2024-08-07T17:48:11.3356428Z RepositoryProjects: read
2024-08-07T17:48:11.3357429Z SecurityEvents: read
2024-08-07T17:48:11.3358272Z Statuses: read
2024-08-07T17:48:11.3359170Z ##[endgroup]
2024-08-07T17:48:11.3363927Z Secret source: Actions
2024-08-07T17:48:11.3365403Z Prepare workflow directory
2024-08-07T17:48:11.8040142Z Prepare all required actions
2024-08-07T17:48:11.8100516Z Getting action download info
2024-08-07T17:48:11.9969938Z Download action repository 'pytorch/test-infra@main' (SHA:a1f5a89251fc4258ab59806094fe3108f7d6741a)
2024-08-07T17:48:13.7093594Z Download action repository 'pytorch/pytorch@main' (SHA:a62710c82039b798befd86e938d2137af3978c93)
2024-08-07T17:48:26.6606453Z Download action repository 'aws-actions/configure-aws-credentials@v3' (SHA:50ac8dd1e1b10d09dac7b8727528b91bed831ac0)
2024-08-07T17:48:26.8581263Z Download action repository 'seemethere/upload-artifact-s3@v5' (SHA:baba72d0712b404f646cebe0730933554ebce96a)
2024-08-07T17:48:27.1639400Z Getting action download info
2024-08-07T17:48:27.2798750Z Download action repository 'malfet/checkout@silent-checkout' (SHA:e07af140b3ccefc05679e3755b9db68f4ee4589c)
2024-08-07T17:48:27.6023268Z Getting action download info
2024-08-07T17:48:27.6848815Z Download action repository 'nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482' (SHA:3e91a01664abd3c5cd539100d10d33b9c5b68482)
2024-08-07T17:48:27.8642303Z Uses: pytorch/pytorch/.github/workflows/_linux-test.yml@refs/pull/131248/merge (f779f6b7738020e244184bded4026b37de3f9f24)
2024-08-07T17:48:27.8645701Z ##[group] Inputs
2024-08-07T17:48:27.8646212Z build-environment: linux-focal-cuda12.1-py3.10-gcc9
2024-08-07T17:48:27.8648595Z test-matrix: {"include": [{"config": "default", "shard": 1, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 2, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 3, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 4, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 5, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}]}
2024-08-07T17:48:27.8651344Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9
2024-08-07T17:48:27.8652347Z sync-tag:
2024-08-07T17:48:27.8653564Z timeout-minutes: 360
2024-08-07T17:48:27.8653955Z use-gha:
2024-08-07T17:48:27.8654269Z dashboard-tag:
2024-08-07T17:48:27.8654614Z s3-bucket: gha-artifacts
2024-08-07T17:48:27.8654989Z aws-role-to-assume:
2024-08-07T17:48:27.8655340Z ##[endgroup]
2024-08-07T17:48:27.8655970Z Complete job name: linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, amz2023.linux.4xlarge.nvidia.gpu)
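The test-matrix input above is plain JSON: each entry of its .include array pins one shard of the test job to a runner label, and this log corresponds to shard 3 of 5. As a minimal sketch of how such a matrix entry can be inspected from a shell (the TEST_MATRIX value is abbreviated and the jq filter is illustrative, not part of the workflow itself):

  # Sketch: pick shard 3's runner label out of a test-matrix JSON like the one logged above.
  TEST_MATRIX='{"include": [{"config": "default", "shard": 3, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}]}'
  # jq selects the entry whose .shard field matches, then prints its runner label.
  echo "$TEST_MATRIX" | jq -r '.include[] | select(.shard == 3) | .runner'
  # Prints: amz2023.linux.4xlarge.nvidia.gpu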
2024-08-07T17:48:27.9404469Z A job started hook has been configured by the self-hosted runner administrator
2024-08-07T17:48:27.9563494Z ##[group]Run '/home/ec2-user/runner-scripts/before_job.sh'
2024-08-07T17:48:27.9575185Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2024-08-07T17:48:27.9576208Z ##[endgroup]
2024-08-07T17:48:29.9472310Z Runner Type: amz2023.linux.4xlarge.nvidia.gpu
2024-08-07T17:48:29.9473590Z Instance Type: g3.4xlarge
2024-08-07T17:48:29.9475144Z AMI Name: al2023-ami-2023.5.20240701.0-kernel-6.1-x86_64
2024-08-07T17:48:29.9476119Z AMI ID: ami-06c68f701d8090592
2024-08-07T17:48:37.5587351Z ##[group]Run pytorch/test-infra/.github/actions/setup-ssh@main
2024-08-07T17:48:37.5587940Z with:
2024-08-07T17:48:37.5588841Z github-secret: ***
2024-08-07T17:48:37.5589790Z instructions: All testing is done inside the container, to start an interactive session run: docker exec -it $(docker container ps --format '{{.ID}}') bash
2024-08-07T17:48:37.5590709Z activate-with-label: false
2024-08-07T17:48:37.5591084Z label: with-ssh
2024-08-07T17:48:37.5591425Z remove-existing-keys: true
2024-08-07T17:48:37.5591781Z fail-silently: true
2024-08-07T17:48:37.5592167Z env:
2024-08-07T17:48:37.5592463Z GIT_DEFAULT_BRANCH: main
2024-08-07T17:48:37.5592801Z ##[endgroup]
2024-08-07T17:48:37.6780890Z Please see https://github.com/pytorch/pytorch/wiki/Debugging-using-with-ssh-for-Github-Actions for more info.
2024-08-07T17:48:38.0094168Z Grabbing public ssh keys from https://github.com/zdevito.keys
2024-08-07T17:48:38.0930834Z ~/.ssh/authorized_keys file found on node, removing ~/.ssh and starting fresh
2024-08-07T17:48:38.0954837Z Public keys pulled and installed to /home/ec2-user/.ssh/authorized_keys
2024-08-07T17:48:38.1000740Z Login using: ssh ec2-user@ec2-54-147-63-227.compute-1.amazonaws.com
2024-08-07T17:48:38.1002107Z All testing is done inside the container, to start an interactive session run:
2024-08-07T17:48:38.1003092Z docker exec -it $(docker container ps --format '{{.ID}}') bash
2024-08-07T17:48:38.1149103Z ##[group]Run pytorch/pytorch/.github/actions/checkout-pytorch@main
2024-08-07T17:48:38.1149756Z with:
2024-08-07T17:48:38.1150056Z submodules: recursive
2024-08-07T17:48:38.1150446Z fetch-depth: 0
2024-08-07T17:48:38.1150769Z env:
2024-08-07T17:48:38.1151054Z GIT_DEFAULT_BRANCH: main
2024-08-07T17:48:38.1151433Z ##[endgroup]
2024-08-07T17:48:38.1262377Z ##[group]Run retry () {
2024-08-07T17:48:38.1262802Z retry () {
2024-08-07T17:48:38.1263266Z  $* || (sleep 1 && $*) || (sleep 2 && $*) || (sleep 4 && $*) || (sleep 8 && $*)
2024-08-07T17:48:38.1263770Z }
2024-08-07T17:48:38.1264090Z echo "${GITHUB_WORKSPACE}"
2024-08-07T17:48:38.1264487Z if [ -z "${NO_SUDO}" ]; then
2024-08-07T17:48:38.1264942Z  retry sudo rm -rf "${GITHUB_WORKSPACE}"
2024-08-07T17:48:38.1265375Z else
2024-08-07T17:48:38.1265702Z  retry rm -rf "${GITHUB_WORKSPACE}"
2024-08-07T17:48:38.1266113Z fi
2024-08-07T17:48:38.1266491Z mkdir "${GITHUB_WORKSPACE}"
2024-08-07T17:48:38.1274144Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2024-08-07T17:48:38.1274636Z env:
2024-08-07T17:48:38.1274914Z GIT_DEFAULT_BRANCH: main
2024-08-07T17:48:38.1275268Z NO_SUDO:
2024-08-07T17:48:38.1275559Z ##[endgroup]
2024-08-07T17:48:38.1307510Z /home/ec2-user/actions-runner/_work/pytorch/pytorch
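The workspace-cleanup step above defines retry as $* chained through sleeps of 1, 2, 4, and 8 seconds, i.e. up to five attempts with doubling backoff. A more careful rendering of the same pattern (a sketch, not the workflow's exact helper; the function name and attempt count are arbitrary) quotes its arguments with "$@" so commands containing spaces survive word splitting:

  # Sketch: retry a command with exponential backoff (generalizes the logged helper).
  retry_backoff () {
    local delay=1 attempts=5 i
    for ((i = 1; i <= attempts; i++)); do
      "$@" && return 0               # run the command verbatim; return on first success
      (( i < attempts )) && sleep "$delay"
      delay=$(( delay * 2 ))         # 1s, 2s, 4s, 8s between attempts
    done
    return 1                         # every attempt failed
  }
  retry_backoff rm -rf "${GITHUB_WORKSPACE:-/tmp/example-workspace}"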
2024-08-07T17:48:41.6975004Z ##[group]Run malfet/checkout@silent-checkout
2024-08-07T17:48:41.6975481Z with:
2024-08-07T17:48:41.6975790Z ref: 016588f53c6904b840aa56aa86f95460b4d9c996
2024-08-07T17:48:41.6976218Z fetch-depth: 0
2024-08-07T17:48:41.6976538Z submodules: recursive
2024-08-07T17:48:41.6976856Z quiet-checkout: true
2024-08-07T17:48:41.6977203Z repository: pytorch/pytorch
2024-08-07T17:48:41.6977720Z token: ***
2024-08-07T17:48:41.6978022Z ssh-strict: true
2024-08-07T17:48:41.6978327Z persist-credentials: true
2024-08-07T17:48:41.6978680Z clean: true
2024-08-07T17:48:41.6979005Z sparse-checkout-cone-mode: true
2024-08-07T17:48:41.6979387Z lfs: false
2024-08-07T17:48:41.6979689Z set-safe-directory: true
2024-08-07T17:48:41.6980012Z env:
2024-08-07T17:48:41.6980294Z GIT_DEFAULT_BRANCH: main
2024-08-07T17:48:41.6980636Z ##[endgroup]
2024-08-07T17:48:41.8323667Z Syncing repository: pytorch/pytorch
2024-08-07T17:48:41.8325571Z ##[group]Getting Git version info
2024-08-07T17:48:41.8326200Z Working directory is '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
2024-08-07T17:48:41.8327560Z [command]/usr/bin/git version
2024-08-07T17:48:41.8327932Z git version 2.40.1
2024-08-07T17:48:41.8351845Z ##[endgroup]
2024-08-07T17:48:41.8373951Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/a32cb8e7-8ac3-4838-b4ce-e172891cbcc0' before making global git config changes
2024-08-07T17:48:41.8375152Z Adding repository directory to the temporary git global config as a safe directory
2024-08-07T17:48:41.8380615Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch
2024-08-07T17:48:41.8431284Z Deleting the contents of '/home/ec2-user/actions-runner/_work/pytorch/pytorch'
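Two details in this checkout step are easy to miss: HOME is pointed at a scratch directory so the action's --global git config edits never leak into the runner's real account, and the workspace is registered as a safe.directory so git will operate on a checkout owned by a different user. A minimal sketch of the same isolation trick, with an illustrative repository path rather than the one from this job:

  # Sketch: scope "global" git config to a throwaway HOME, then allow a foreign-owned repo.
  export HOME="$(mktemp -d)"                                  # global config now lives under the temp dir
  git config --global --add safe.directory /srv/checkout/repo # hypothetical path
  git -C /srv/checkout/repo status                            # no "dubious ownership" refusal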
2024-08-07T17:48:41.8437420Z ##[group]Initializing the repository
2024-08-07T17:48:41.8441471Z [command]/usr/bin/git init /home/ec2-user/actions-runner/_work/pytorch/pytorch
2024-08-07T17:48:41.8479960Z hint: Using 'master' as the name for the initial branch. This default branch name
2024-08-07T17:48:41.8480765Z hint: is subject to change. To configure the initial branch name to use in all
2024-08-07T17:48:41.8481468Z hint: of your new repositories, which will suppress this warning, call:
2024-08-07T17:48:41.8482003Z hint:
2024-08-07T17:48:41.8482395Z hint: git config --global init.defaultBranch <name>
2024-08-07T17:48:41.8482834Z hint:
2024-08-07T17:48:41.8483266Z hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
2024-08-07T17:48:41.8483971Z hint: 'development'. The just-created branch can be renamed via this command:
2024-08-07T17:48:41.8484812Z hint:
2024-08-07T17:48:41.8485172Z hint: git branch -m <name>
2024-08-07T17:48:41.8485823Z Initialized empty Git repository in /home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/
2024-08-07T17:48:41.8495626Z [command]/usr/bin/git remote add origin https://github.com/pytorch/pytorch
2024-08-07T17:48:41.8533808Z ##[endgroup]
2024-08-07T17:48:41.8534378Z ##[group]Disabling automatic garbage collection
2024-08-07T17:48:41.8538208Z [command]/usr/bin/git config --local gc.auto 0
2024-08-07T17:48:41.8574850Z ##[endgroup]
2024-08-07T17:48:41.8575364Z ##[group]Setting up auth
2024-08-07T17:48:41.8584258Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2024-08-07T17:48:41.8623894Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2024-08-07T17:48:41.8957625Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2024-08-07T17:48:41.8997097Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2024-08-07T17:48:41.9335775Z [command]/usr/bin/git config --local http.https://github.com/.extraheader AUTHORIZATION: basic ***
2024-08-07T17:48:41.9390466Z ##[endgroup]
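The auth setup injects the token as a basic-auth header on every github.com request via http.<url>.extraheader, rather than embedding credentials in the remote URL where they would land in .git/config and in logs. A sketch of the technique with a placeholder token; the x-access-token username is what checkout-style tooling conventionally pairs with an installation token, assumed here rather than read from this log:

  # Sketch only: never hard-code or log a real token.
  TOKEN='ghp_exampleToken123'                                # illustrative placeholder
  B64=$(printf 'x-access-token:%s' "$TOKEN" | base64 -w0)    # header value is base64 of user:token
  git config --local http.https://github.com/.extraheader "AUTHORIZATION: basic $B64"
  git fetch origin                                           # the header now rides along on every github.com request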
2024-08-07T17:48:41.9391079Z ##[group]Fetching the repository
2024-08-07T17:48:41.9400068Z [command]/usr/bin/git -c protocol.version=2 fetch --prune --progress --no-recurse-submodules --quiet origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
2024-08-07T17:48:44.4582225Z remote: Enumerating objects: 1008255
2024-08-07T17:48:44.4582888Z remote: Enumerating objects: 1011035, done.
2024-08-07T17:48:44.4642993Z remote: Counting objects: 100% (2780/2780), done.
2024-08-07T17:48:45.7498100Z remote: Compressing objects: 100% (1516/1516), done.
2024-08-07T17:49:13.8813485Z remote: Total 1011035 (delta 1758), reused 2106 (delta 1257), pack-reused 1008255
2024-08-07T17:49:40.7410538Z [command]/usr/bin/git rev-parse --verify --quiet 016588f53c6904b840aa56aa86f95460b4d9c996^{object}
2024-08-07T17:49:40.7444697Z 016588f53c6904b840aa56aa86f95460b4d9c996
2024-08-07T17:49:40.7451235Z ##[endgroup]
2024-08-07T17:49:40.7451786Z ##[group]Determining the checkout info
2024-08-07T17:49:40.7452813Z ##[endgroup]
2024-08-07T17:49:40.7453326Z ##[group]Checking out the ref
2024-08-07T17:49:40.7457495Z [command]/usr/bin/git checkout --quiet --force 016588f53c6904b840aa56aa86f95460b4d9c996
2024-08-07T17:49:42.7168318Z ##[endgroup]
2024-08-07T17:49:42.7168906Z ##[group]Setting up auth for fetching submodules
2024-08-07T17:49:42.7174651Z [command]/usr/bin/git config --global http.https://github.com/.extraheader AUTHORIZATION: basic ***
2024-08-07T17:49:42.7242641Z [command]/usr/bin/git config --global --unset-all url.https://github.com/.insteadOf
2024-08-07T17:49:42.7279527Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf git@github.com:
2024-08-07T17:49:42.7318899Z [command]/usr/bin/git config --global --add url.https://github.com/.insteadOf org-21003710@github.com:
2024-08-07T17:49:42.7353968Z ##[endgroup]
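Before touching submodules, the action rewrites SSH-style remotes to HTTPS with url.<base>.insteadOf, so the extraheader credential above also covers nested clones recorded as git@github.com: (or the org-21003710@github.com: alias). The same mechanism in isolation, with an example remote:

  # Sketch: route SSH-style URLs through authenticated HTTPS (example submodule remote).
  git config --global --add url.https://github.com/.insteadOf git@github.com:
  # A remote recorded as git@github.com:google/googletest.git is now fetched as
  # https://github.com/google/googletest.git, so the HTTPS auth header applies.
  git ls-remote git@github.com:google/googletest.git HEAD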
2024-08-07T17:49:42.7354495Z ##[group]Fetching submodules
2024-08-07T17:49:42.7359857Z [command]/usr/bin/git submodule sync --recursive
2024-08-07T17:49:42.7735344Z [command]/usr/bin/git -c protocol.version=2 submodule update --init --force --recursive
2024-08-07T17:49:42.8098409Z Submodule 'android/libs/fbjni' (https://github.com/facebookincubator/fbjni.git) registered for path 'android/libs/fbjni'
2024-08-07T17:49:42.8101202Z Submodule 'third_party/NNPACK_deps/FP16' (https://github.com/Maratyszcza/FP16.git) registered for path 'third_party/FP16'
2024-08-07T17:49:42.8105092Z Submodule 'third_party/NNPACK_deps/FXdiv' (https://github.com/Maratyszcza/FXdiv.git) registered for path 'third_party/FXdiv'
2024-08-07T17:49:42.8108689Z Submodule 'third_party/NNPACK' (https://github.com/Maratyszcza/NNPACK.git) registered for path 'third_party/NNPACK'
2024-08-07T17:49:42.8112872Z Submodule 'third_party/VulkanMemoryAllocator' (https://github.com/GPUOpen-LibrariesAndSDKs/VulkanMemoryAllocator.git) registered for path 'third_party/VulkanMemoryAllocator'
2024-08-07T17:49:42.8116389Z Submodule 'third_party/XNNPACK' (https://github.com/google/XNNPACK.git) registered for path 'third_party/XNNPACK'
2024-08-07T17:49:42.8120564Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/benchmark'
2024-08-07T17:49:42.8124925Z Submodule 'third_party/cpp-httplib' (https://github.com/yhirose/cpp-httplib.git) registered for path 'third_party/cpp-httplib'
2024-08-07T17:49:42.8129198Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo.git) registered for path 'third_party/cpuinfo'
2024-08-07T17:49:42.8133762Z Submodule 'third_party/cudnn_frontend' (https://github.com/NVIDIA/cudnn-frontend.git) registered for path 'third_party/cudnn_frontend'
2024-08-07T17:49:42.8138145Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/cutlass'
2024-08-07T17:49:42.8143217Z Submodule 'third_party/eigen' (https://gitlab.com/libeigen/eigen.git) registered for path 'third_party/eigen'
2024-08-07T17:49:42.8148167Z Submodule 'third_party/fbgemm' (https://github.com/pytorch/fbgemm) registered for path 'third_party/fbgemm'
2024-08-07T17:49:42.8153353Z Submodule 'third_party/flatbuffers' (https://github.com/google/flatbuffers.git) registered for path 'third_party/flatbuffers'
2024-08-07T17:49:42.8158317Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/fmt'
2024-08-07T17:49:42.8163636Z Submodule 'third_party/foxi' (https://github.com/houseroad/foxi.git) registered for path 'third_party/foxi'
2024-08-07T17:49:42.8171327Z Submodule 'third_party/gemmlowp/gemmlowp' (https://github.com/google/gemmlowp.git) registered for path 'third_party/gemmlowp/gemmlowp'
2024-08-07T17:49:42.8176763Z Submodule 'third_party/gloo' (https://github.com/facebookincubator/gloo) registered for path 'third_party/gloo'
2024-08-07T17:49:42.8182343Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/googletest'
2024-08-07T17:49:42.8188202Z Submodule 'third_party/ideep' (https://github.com/intel/ideep) registered for path 'third_party/ideep'
2024-08-07T17:49:42.8194111Z Submodule 'third_party/ittapi' (https://github.com/intel/ittapi.git) registered for path 'third_party/ittapi'
2024-08-07T17:49:42.8200765Z Submodule 'third_party/kineto' (https://github.com/pytorch/kineto) registered for path 'third_party/kineto'
2024-08-07T17:49:42.8206892Z Submodule 'third_party/mimalloc' (https://github.com/microsoft/mimalloc.git) registered for path 'third_party/mimalloc'
2024-08-07T17:49:42.8212972Z Submodule 'third_party/nccl/nccl' (https://github.com/NVIDIA/nccl) registered for path 'third_party/nccl/nccl'
2024-08-07T17:49:42.8219337Z Submodule 'third_party/nlohmann' (https://github.com/nlohmann/json.git) registered for path 'third_party/nlohmann'
2024-08-07T17:49:42.8225389Z Submodule 'third_party/onnx' (https://github.com/onnx/onnx.git) registered for path 'third_party/onnx'
2024-08-07T17:49:42.8232272Z Submodule 'third_party/opentelemetry-cpp' (https://github.com/open-telemetry/opentelemetry-cpp.git) registered for path 'third_party/opentelemetry-cpp'
2024-08-07T17:49:42.8238782Z Submodule 'third_party/pocketfft' (https://github.com/mreineck/pocketfft) registered for path 'third_party/pocketfft'
2024-08-07T17:49:42.8245573Z Submodule 'third_party/protobuf' (https://github.com/protocolbuffers/protobuf.git) registered for path 'third_party/protobuf'
2024-08-07T17:49:42.8252216Z Submodule 'third_party/NNPACK_deps/psimd' (https://github.com/Maratyszcza/psimd.git) registered for path 'third_party/psimd'
2024-08-07T17:49:42.8259639Z Submodule 'third_party/NNPACK_deps/pthreadpool' (https://github.com/Maratyszcza/pthreadpool.git) registered for path 'third_party/pthreadpool'
2024-08-07T17:49:42.8266331Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/pybind11'
2024-08-07T17:49:42.8273639Z Submodule 'third_party/python-peachpy' (https://github.com/malfet/PeachPy.git) registered for path 'third_party/python-peachpy'
2024-08-07T17:49:42.8282741Z Submodule 'third_party/sleef' (https://github.com/shibatch/sleef) registered for path 'third_party/sleef'
2024-08-07T17:49:42.8290287Z Submodule 'third_party/tensorpipe' (https://github.com/pytorch/tensorpipe.git) registered for path 'third_party/tensorpipe'
2024-08-07T17:49:42.8325353Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/android/libs/fbjni'...
2024-08-07T17:49:43.1955578Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FP16'...
2024-08-07T17:49:43.4022640Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/FXdiv'...
2024-08-07T17:49:43.5948786Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/NNPACK'...
2024-08-07T17:49:43.9106243Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/VulkanMemoryAllocator'...
2024-08-07T17:49:46.0023063Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/XNNPACK'...
2024-08-07T17:50:00.0414981Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/benchmark'...
2024-08-07T17:50:00.4923851Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpp-httplib'...
2024-08-07T17:50:01.0117469Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cpuinfo'...
2024-08-07T17:50:01.6905734Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cudnn_frontend'...
2024-08-07T17:50:03.2318670Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/cutlass'...
2024-08-07T17:50:05.6033196Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/eigen'...
2024-08-07T17:50:12.6239053Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm'...
2024-08-07T17:50:14.1843162Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/flatbuffers'...
2024-08-07T17:50:16.2247920Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fmt'...
2024-08-07T17:50:17.6164257Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/foxi'...
2024-08-07T17:50:17.8037202Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gemmlowp/gemmlowp'...
2024-08-07T17:50:18.2727172Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/gloo'...
2024-08-07T17:50:18.6516183Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/googletest'...
2024-08-07T17:50:19.9466792Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep'...
2024-08-07T17:50:20.5849153Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ittapi'...
2024-08-07T17:50:20.9149554Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto'...
2024-08-07T17:50:22.8586144Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/mimalloc'...
2024-08-07T17:50:23.7121388Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nccl/nccl'...
2024-08-07T17:50:25.5358341Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/nlohmann'...
2024-08-07T17:50:32.9590003Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx'...
2024-08-07T17:50:35.3599602Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp'...
2024-08-07T17:50:40.5573923Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pocketfft'...
2024-08-07T17:50:40.8130545Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf'...
2024-08-07T17:50:51.1935292Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/psimd'...
2024-08-07T17:50:51.3828782Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pthreadpool'...
2024-08-07T17:50:51.6610664Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/pybind11'...
2024-08-07T17:50:52.6875625Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/python-peachpy'...
2024-08-07T17:50:53.0142870Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/sleef'...
2024-08-07T17:50:53.7310490Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe'...
2024-08-07T17:50:54.2360513Z Submodule path 'android/libs/fbjni': checked out '7e1e1fe3858c63c251c637ae41a20de425dde96f'
2024-08-07T17:50:54.2514295Z Submodule path 'third_party/FP16': checked out '4dfe081cf6bcd15db339cf2680b9281b8451eeb3'
2024-08-07T17:50:54.2629492Z Submodule path 'third_party/FXdiv': checked out 'b408327ac2a15ec3e43352421954f5b1967701d1'
2024-08-07T17:50:54.2968708Z Submodule path 'third_party/NNPACK': checked out 'c07e3a0400713d546e0dea2d5466dd22ea389c73'
2024-08-07T17:50:54.3478718Z Submodule path 'third_party/VulkanMemoryAllocator': checked out 'a6bfc237255a6bac1513f7c1ebde6d8aed6b5191'
2024-08-07T17:50:55.7265101Z Submodule path 'third_party/XNNPACK': checked out 'fcbf55af6cf28a4627bcd1f703ab7ad843f0f3a2'
2024-08-07T17:50:55.7591933Z Submodule path 'third_party/benchmark': checked out '0d98dba29d66e93259db7daa53a9327df767a415'
2024-08-07T17:50:55.8165562Z Submodule path 'third_party/cpp-httplib': checked out '3b6597bba913d51161383657829b7e644e59c006'
2024-08-07T17:50:55.9425733Z Submodule path 'third_party/cpuinfo': checked out '3c8b1533ac03dd6531ab6e7b9245d488f13a82a5'
2024-08-07T17:50:55.9864806Z Submodule path 'third_party/cudnn_frontend': checked out '98ca4e1941fe3263f128f74f10063a3ea35c7019'
2024-08-07T17:50:56.6816114Z Submodule path 'third_party/cutlass': checked out 'bbe579a9e3beb6ea6626d9227ec32d0dae119a49'
2024-08-07T17:50:57.0213437Z Submodule path 'third_party/eigen': checked out '3147391d946bb4b6c68edd901f2add6ac1f31f8c'
2024-08-07T17:50:57.1285549Z Submodule path 'third_party/fbgemm': checked out 'dbc3157bf256f1339b3fa1fef2be89ac4078be0e'
2024-08-07T17:50:57.1308705Z Submodule 'third_party/asmjit' (https://github.com/asmjit/asmjit.git) registered for path 'third_party/fbgemm/third_party/asmjit'
2024-08-07T17:50:57.1312704Z Submodule 'third_party/cpuinfo' (https://github.com/pytorch/cpuinfo) registered for path 'third_party/fbgemm/third_party/cpuinfo'
2024-08-07T17:50:57.1316514Z Submodule 'third_party/cutlass' (https://github.com/NVIDIA/cutlass.git) registered for path 'third_party/fbgemm/third_party/cutlass'
2024-08-07T17:50:57.1320651Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/fbgemm/third_party/googletest'
2024-08-07T17:50:57.1324824Z Submodule 'third_party/hipify_torch' (https://github.com/ROCmSoftwarePlatform/hipify_torch.git) registered for path 'third_party/fbgemm/third_party/hipify_torch'
2024-08-07T17:50:57.1358145Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/asmjit'...
2024-08-07T17:50:58.3607335Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/cpuinfo'...
2024-08-07T17:50:59.0971259Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/cutlass'...
2024-08-07T17:51:01.4476920Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/googletest'...
2024-08-07T17:51:02.7512102Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/fbgemm/third_party/hipify_torch'...
2024-08-07T17:51:03.1440994Z Submodule path 'third_party/fbgemm/third_party/asmjit': checked out 'd3fbf7c9bc7c1d1365a94a45614b91c5a3706b81'
2024-08-07T17:51:03.2694684Z Submodule path 'third_party/fbgemm/third_party/cpuinfo': checked out 'ed8b86a253800bafdb7b25c5c399f91bff9cb1f3'
2024-08-07T17:51:03.8481110Z Submodule path 'third_party/fbgemm/third_party/cutlass': checked out 'fc9ebc645b63f3a6bc80aaefde5c063fb72110d6'
2024-08-07T17:51:03.9295940Z Submodule path 'third_party/fbgemm/third_party/googletest': checked out 'cbf019de22c8dd37b2108da35b2748fd702d1796'
2024-08-07T17:51:03.9453862Z Submodule path 'third_party/fbgemm/third_party/hipify_torch': checked out '23f53b025b466d8ec3c45d52290d3442f7fbe6b1'
2024-08-07T17:51:04.1080708Z Submodule path 'third_party/flatbuffers': checked out '01834de25e4bf3975a9a00e816292b1ad0fe184b'
2024-08-07T17:51:04.1601557Z Submodule path 'third_party/fmt': checked out '0c9fce2ffefecfdce794e1859584e25877b7b592'
2024-08-07T17:51:04.1721923Z Submodule path 'third_party/foxi': checked out 'c278588e34e535f0bb8f00df3880d26928038cad'
2024-08-07T17:51:04.2238412Z Submodule path 'third_party/gemmlowp/gemmlowp': checked out '3fb5c176c17c765a3492cd2f0321b0dab712f350'
2024-08-07T17:51:04.2592557Z Submodule path 'third_party/gloo': checked out '5354032ea08eadd7fc4456477f7f7c6308818509'
2024-08-07T17:51:04.3179694Z Submodule path 'third_party/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929'
2024-08-07T17:51:04.3343103Z Submodule path 'third_party/ideep': checked out '55ca0191687aaf19aca5cdb7881c791e3bea442b'
2024-08-07T17:51:04.3363091Z Submodule 'mkl-dnn' (https://github.com/intel/mkl-dnn.git) registered for path 'third_party/ideep/mkl-dnn'
2024-08-07T17:51:04.3392961Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/ideep/mkl-dnn'...
2024-08-07T17:51:20.0334107Z Submodule path 'third_party/ideep/mkl-dnn': checked out '1137e04ec0b5251ca2b4400a4fd3c667ce843d67'
2024-08-07T17:51:20.0563165Z Submodule path 'third_party/ittapi': checked out '5b8a7d7422611c3a0d799fb5fc5dd4abfae35b42'
2024-08-07T17:51:20.1703816Z Submodule path 'third_party/kineto': checked out 'da2f2682cabaf95d601fa2a9b7e0979f84fe7667'
2024-08-07T17:51:20.1726502Z Submodule 'libkineto/third_party/dynolog' (https://github.com/facebookincubator/dynolog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog'
2024-08-07T17:51:20.1730264Z Submodule 'libkineto/third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/fmt'
2024-08-07T17:51:20.1734254Z Submodule 'libkineto/third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/googletest'
2024-08-07T17:51:20.1766235Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog'...
2024-08-07T17:51:20.8355078Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/fmt'...
2024-08-07T17:51:22.2434203Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/googletest'...
2024-08-07T17:51:23.6081059Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog': checked out '7d04a0053a845370ae06ce317a22a48e9edcc74e'
2024-08-07T17:51:23.6102507Z Submodule 'third_party/DCGM' (https://github.com/NVIDIA/DCGM.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2024-08-07T17:51:23.6106278Z Submodule 'third_party/cpr' (https://github.com/libcpr/cpr.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2024-08-07T17:51:23.6111033Z Submodule 'third_party/fmt' (https://github.com/fmtlib/fmt.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2024-08-07T17:51:23.6114580Z Submodule 'third_party/gflags' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2024-08-07T17:51:23.6119423Z Submodule 'third_party/glog' (https://github.com/google/glog.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2024-08-07T17:51:23.6123364Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2024-08-07T17:51:23.6127613Z Submodule 'third_party/json' (https://github.com/nlohmann/json.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2024-08-07T17:51:23.6132090Z Submodule 'third_party/pfs' (https://github.com/dtrugman/pfs.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2024-08-07T17:51:23.6164154Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'...
2024-08-07T17:51:24.7225949Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'...
2024-08-07T17:51:25.1352259Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'...
2024-08-07T17:51:26.5473333Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'...
2024-08-07T17:51:27.0038029Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/glog'...
2024-08-07T17:51:27.8257556Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'...
2024-08-07T17:51:29.1856546Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/json'...
2024-08-07T17:51:36.5227511Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'...
2024-08-07T17:51:37.0012275Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM': checked out 'ffde4e54bc7249a6039a5e6b45b395141e1217f9'
2024-08-07T17:51:37.0259803Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr': checked out '871ed52d350214a034f6ef8a3b8f51c5ce1bd400'
2024-08-07T17:51:37.0749317Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt': checked out 'cd4af11efc9c622896a3e4cb599fa28668ca3d05'
2024-08-07T17:51:37.0929900Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags': checked out 'e171aa2d15ed9eb17054558e0b3a6a413bb01067'
2024-08-07T17:51:37.0952718Z Submodule 'doc' (https://github.com/gflags/gflags.git) registered for path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2024-08-07T17:51:37.0985040Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'...
2024-08-07T17:51:37.4123994Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc': checked out '8411df715cf522606e3b1aca386ddfc0b63d34b4'
2024-08-07T17:51:37.4369552Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog': checked out 'b33e3bad4c46c8a6345525fd822af355e5ef9446'
2024-08-07T17:51:37.4906903Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest': checked out '58d77fa8070e8cec2dc1ed015d66b454c8d78850'
2024-08-07T17:51:37.6233381Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/json': checked out '4f8fba14066156b73f1189a2b8bd568bde5284c5'
2024-08-07T17:51:37.6447886Z Submodule path 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs': checked out 'f68a2fa8ea36c783bdd760371411fcb495aa3150'
2024-08-07T17:51:37.6953223Z Submodule path 'third_party/kineto/libkineto/third_party/fmt': checked out '0041a40c1350ba702d475b9c4ad62da77caea164'
2024-08-07T17:51:37.7712605Z Submodule path 'third_party/kineto/libkineto/third_party/googletest': checked out '7aca84427f224eeed3144123d5230d5871e93347'
2024-08-07T17:51:37.8235740Z Submodule path 'third_party/mimalloc': checked out 'b66e3214d8a104669c2ec05ae91ebc26a8f5ab78'
2024-08-07T17:51:37.8561201Z Submodule path 'third_party/nccl/nccl': checked out 'ab2b89c4c339bd7f816fbc114a4b05d386b66290'
2024-08-07T17:51:37.9969904Z Submodule path 'third_party/nlohmann': checked out '87cda1d6646592ac5866dc703c8e1839046a6806'
2024-08-07T17:51:38.5486867Z Submodule path 'third_party/onnx': checked out '3bf92c03a9f27eba3bda1e5b9e63ea20ec213557'
2024-08-07T17:51:38.5528407Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/onnx/third_party/benchmark'
2024-08-07T17:51:38.5532028Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/onnx/third_party/pybind11'
2024-08-07T17:51:38.5564877Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/benchmark'...
2024-08-07T17:51:39.0699862Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/onnx/third_party/pybind11'...
2024-08-07T17:51:40.1536800Z Submodule path 'third_party/onnx/third_party/benchmark': checked out '2dd015dfef425c866d9a43f2c67d8b52d709acb6'
2024-08-07T17:51:40.1997081Z Submodule path 'third_party/onnx/third_party/pybind11': checked out '5b0a6fc2017fcc176545afe3e09c9f9885283242'
2024-08-07T17:51:40.2985937Z Submodule path 'third_party/opentelemetry-cpp': checked out 'a799f4aed9c94b765dcdaabaeab7d5e7e2310878'
2024-08-07T17:51:40.3012312Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark) registered for path 'third_party/opentelemetry-cpp/third_party/benchmark'
2024-08-07T17:51:40.3016240Z Submodule 'third_party/googletest' (https://github.com/google/googletest) registered for path 'third_party/opentelemetry-cpp/third_party/googletest'
2024-08-07T17:51:40.3020299Z Submodule 'third_party/ms-gsl' (https://github.com/microsoft/GSL) registered for path 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2024-08-07T17:51:40.3024585Z Submodule 'third_party/nlohmann-json' (https://github.com/nlohmann/json) registered for path 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2024-08-07T17:51:40.3029168Z Submodule 'third_party/opentelemetry-proto' (https://github.com/open-telemetry/opentelemetry-proto) registered for path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2024-08-07T17:51:40.3033585Z Submodule 'third_party/opentracing-cpp' (https://github.com/opentracing/opentracing-cpp.git) registered for path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2024-08-07T17:51:40.3038202Z Submodule 'third_party/prometheus-cpp' (https://github.com/jupp0r/prometheus-cpp) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2024-08-07T17:51:40.3042926Z Submodule 'tools/vcpkg' (https://github.com/Microsoft/vcpkg) registered for path 'third_party/opentelemetry-cpp/tools/vcpkg'
2024-08-07T17:51:40.3075742Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/benchmark'...
2024-08-07T17:51:40.9251247Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/googletest'...
2024-08-07T17:51:42.2506076Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/ms-gsl'...
2024-08-07T17:51:42.6086973Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/nlohmann-json'...
2024-08-07T17:51:49.9577841Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentelemetry-proto'...
2024-08-07T17:51:50.3536085Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/opentracing-cpp'...
2024-08-07T17:51:50.7091522Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp'...
2024-08-07T17:51:51.0339335Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/tools/vcpkg'...
2024-08-07T17:51:58.5691584Z Submodule path 'third_party/opentelemetry-cpp/third_party/benchmark': checked out 'd572f4777349d43653b21d6c2fc63020ab326db2'
2024-08-07T17:51:58.6219933Z Submodule path 'third_party/opentelemetry-cpp/third_party/googletest': checked out 'b796f7d44681514f58a683a3a71ff17c94edb0c1'
2024-08-07T17:51:58.6420230Z Submodule path 'third_party/opentelemetry-cpp/third_party/ms-gsl': checked out '6f4529395c5b7c2d661812257cd6780c67e54afa'
2024-08-07T17:51:58.7772545Z Submodule path 'third_party/opentelemetry-cpp/third_party/nlohmann-json': checked out 'bc889afb4c5bf1c0d8ee29ef35eaaf4c8bef8a5d'
2024-08-07T17:51:58.7941460Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto': checked out '4ca4f0335c63cda7ab31ea7ed70d6553aee14dce'
2024-08-07T17:51:58.8136619Z Submodule path 'third_party/opentelemetry-cpp/third_party/opentracing-cpp': checked out '06b57f48ded1fa3bdd3d4346f6ef29e40e08eaf5'
2024-08-07T17:51:58.8349496Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp': checked out 'c9ffcdda9086ffd9e1283ea7a0276d831f3c8a8d'
2024-08-07T17:51:58.8369677Z Submodule 'civetweb' (https://github.com/civetweb/civetweb.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2024-08-07T17:51:58.8373556Z Submodule 'googletest' (https://github.com/google/googletest.git) registered for path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2024-08-07T17:51:58.8405462Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'...
2024-08-07T17:52:01.1981825Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'...
2024-08-07T17:52:02.8716262Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb': checked out 'eefb26f82b233268fc98577d265352720d477ba4'
2024-08-07T17:52:02.9319452Z Submodule path 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest': checked out 'e2239ee6043f73722e7aa812a459f54a28552929'
2024-08-07T17:52:03.6394364Z Submodule path 'third_party/opentelemetry-cpp/tools/vcpkg': checked out '8eb57355a4ffb410a2e94c07b4dca2dffbee8e50'
2024-08-07T17:52:03.6540212Z Submodule path 'third_party/pocketfft': checked out '9d3ab05a7fffbc71a492bc6a17be034e83e8f0fe'
2024-08-07T17:52:04.0205631Z Submodule path 'third_party/protobuf': checked out 'd1eca4e4b421cd2997495c4b4e65cea6be4e9b8a'
2024-08-07T17:52:04.0232881Z Submodule 'third_party/benchmark' (https://github.com/google/benchmark.git) registered for path 'third_party/protobuf/third_party/benchmark'
2024-08-07T17:52:04.0236645Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/protobuf/third_party/googletest'
2024-08-07T17:52:04.0268990Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/benchmark'...
2024-08-07T17:52:04.6798580Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/protobuf/third_party/googletest'...
2024-08-07T17:52:05.9438759Z Submodule path 'third_party/protobuf/third_party/benchmark': checked out '5b7683f49e1e9223cf9927b24f6fd3d6bd82e3f8'
2024-08-07T17:52:06.0379448Z Submodule path 'third_party/protobuf/third_party/googletest': checked out '5ec7f0c4a113e2f18ac2c6cc7df51ad6afc24081'
2024-08-07T17:52:06.0493301Z Submodule path 'third_party/psimd': checked out '072586a71b55b7f8c584153d223e95687148a900'
2024-08-07T17:52:06.0651019Z Submodule path 'third_party/pthreadpool': checked out '4fe0e1e183925bf8cfa6aae24237e724a96479b8'
2024-08-07T17:52:06.1155845Z Submodule path 'third_party/pybind11': checked out '941f45bcb51457884fa1afd6e24a67377d70f75c'
2024-08-07T17:52:06.1532004Z Submodule path 'third_party/python-peachpy': checked out 'f45429b087dd7d5bc78bb40dc7cf06425c252d67'
2024-08-07T17:52:06.2070751Z Submodule path 'third_party/sleef': checked out '60e76d2bce17d278b439d9da17177c8f957a9e9b'
2024-08-07T17:52:06.2451519Z Submodule path 'third_party/tensorpipe': checked out '52791a2fd214b2a9dc5759d36725909c1daa7f2e'
2024-08-07T17:52:06.2471838Z Submodule 'third_party/googletest' (https://github.com/google/googletest.git) registered for path 'third_party/tensorpipe/third_party/googletest'
2024-08-07T17:52:06.2475299Z Submodule 'third_party/libnop' (https://github.com/google/libnop.git) registered for path 'third_party/tensorpipe/third_party/libnop'
2024-08-07T17:52:06.2479157Z Submodule 'third_party/libuv' (https://github.com/libuv/libuv.git) registered for path 'third_party/tensorpipe/third_party/libuv'
2024-08-07T17:52:06.2483150Z Submodule 'third_party/pybind11' (https://github.com/pybind/pybind11.git) registered for path 'third_party/tensorpipe/third_party/pybind11'
2024-08-07T17:52:06.2514761Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/googletest'...
2024-08-07T17:52:07.5324390Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libnop'...
2024-08-07T17:52:07.8329381Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/libuv'...
2024-08-07T17:52:09.2345458Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11'...
2024-08-07T17:52:10.4535070Z Submodule path 'third_party/tensorpipe/third_party/googletest': checked out 'aee0f9d9b5b87796ee8a0ab26b7587ec30e8858e'
2024-08-07T17:52:10.4752086Z Submodule path 'third_party/tensorpipe/third_party/libnop': checked out '910b55815be16109f04f4180e9adee14fb4ce281'
2024-08-07T17:52:10.5648835Z Submodule path 'third_party/tensorpipe/third_party/libuv': checked out '1dff88e5161cba5c59276d2070d2e304e4dcb242'
2024-08-07T17:52:10.6040252Z Submodule path 'third_party/tensorpipe/third_party/pybind11': checked out 'a23996fce38ff6ccfbcdc09f1e63f2c4be5ea2ef'
2024-08-07T17:52:10.6059492Z Submodule 'tools/clang' (https://github.com/wjakob/clang-cindex-python3) registered for path 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2024-08-07T17:52:10.6090436Z Cloning into '/home/ec2-user/actions-runner/_work/pytorch/pytorch/third_party/tensorpipe/third_party/pybind11/tools/clang'...
2024-08-07T17:52:10.8363127Z Submodule path 'third_party/tensorpipe/third_party/pybind11/tools/clang': checked out '6a00cbc4a9b8e68b71caf7f774b3f9c753ae84d5' 2024-08-07T17:52:10.8404394Z [command]/usr/bin/git submodule foreach --recursive git config --local gc.auto 0 2024-08-07T17:52:10.8765957Z Entering 'android/libs/fbjni' 2024-08-07T17:52:10.8815781Z Entering 'third_party/FP16' 2024-08-07T17:52:10.8865329Z Entering 'third_party/FXdiv' 2024-08-07T17:52:10.8914924Z Entering 'third_party/NNPACK' 2024-08-07T17:52:10.8964733Z Entering 'third_party/VulkanMemoryAllocator' 2024-08-07T17:52:10.9014226Z Entering 'third_party/XNNPACK' 2024-08-07T17:52:10.9085211Z Entering 'third_party/benchmark' 2024-08-07T17:52:10.9136087Z Entering 'third_party/cpp-httplib' 2024-08-07T17:52:10.9185487Z Entering 'third_party/cpuinfo' 2024-08-07T17:52:10.9236645Z Entering 'third_party/cudnn_frontend' 2024-08-07T17:52:10.9286437Z Entering 'third_party/cutlass' 2024-08-07T17:52:10.9346115Z Entering 'third_party/eigen' 2024-08-07T17:52:10.9399858Z Entering 'third_party/fbgemm' 2024-08-07T17:52:10.9451540Z Entering 'third_party/fbgemm/third_party/asmjit' 2024-08-07T17:52:10.9502228Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2024-08-07T17:52:10.9551471Z Entering 'third_party/fbgemm/third_party/cutlass' 2024-08-07T17:52:10.9608798Z Entering 'third_party/fbgemm/third_party/googletest' 2024-08-07T17:52:10.9657231Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2024-08-07T17:52:10.9710137Z Entering 'third_party/flatbuffers' 2024-08-07T17:52:10.9762453Z Entering 'third_party/fmt' 2024-08-07T17:52:10.9811890Z Entering 'third_party/foxi' 2024-08-07T17:52:10.9862841Z Entering 'third_party/gemmlowp/gemmlowp' 2024-08-07T17:52:10.9912571Z Entering 'third_party/gloo' 2024-08-07T17:52:10.9961842Z Entering 'third_party/googletest' 2024-08-07T17:52:11.0012859Z Entering 'third_party/ideep' 2024-08-07T17:52:11.0063013Z Entering 'third_party/ideep/mkl-dnn' 2024-08-07T17:52:11.0129743Z Entering 'third_party/ittapi' 2024-08-07T17:52:11.0178925Z Entering 'third_party/kineto' 2024-08-07T17:52:11.0231074Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2024-08-07T17:52:11.0280693Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2024-08-07T17:52:11.0333673Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2024-08-07T17:52:11.0384072Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2024-08-07T17:52:11.0437130Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2024-08-07T17:52:11.0486512Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2024-08-07T17:52:11.0542844Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2024-08-07T17:52:11.0593469Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2024-08-07T17:52:11.0645859Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2024-08-07T17:52:11.0699990Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2024-08-07T17:52:11.0752909Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2024-08-07T17:52:11.0804296Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2024-08-07T17:52:11.0856724Z Entering 'third_party/mimalloc' 2024-08-07T17:52:11.0907846Z Entering 'third_party/nccl/nccl' 2024-08-07T17:52:11.0958230Z Entering 'third_party/nlohmann' 2024-08-07T17:52:11.1010437Z Entering 'third_party/onnx' 
2024-08-07T17:52:11.1079796Z Entering 'third_party/onnx/third_party/benchmark' 2024-08-07T17:52:11.1130201Z Entering 'third_party/onnx/third_party/pybind11' 2024-08-07T17:52:11.1186732Z Entering 'third_party/opentelemetry-cpp' 2024-08-07T17:52:11.1239349Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2024-08-07T17:52:11.1287800Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2024-08-07T17:52:11.1339442Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2024-08-07T17:52:11.1388868Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2024-08-07T17:52:11.1442474Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2024-08-07T17:52:11.1490369Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2024-08-07T17:52:11.1542773Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2024-08-07T17:52:11.1590171Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2024-08-07T17:52:11.1644620Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2024-08-07T17:52:11.1697113Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2024-08-07T17:52:11.1772082Z Entering 'third_party/pocketfft' 2024-08-07T17:52:11.1823362Z Entering 'third_party/protobuf' 2024-08-07T17:52:11.1880453Z Entering 'third_party/protobuf/third_party/benchmark' 2024-08-07T17:52:11.1930294Z Entering 'third_party/protobuf/third_party/googletest' 2024-08-07T17:52:11.1986695Z Entering 'third_party/psimd' 2024-08-07T17:52:11.2038070Z Entering 'third_party/pthreadpool' 2024-08-07T17:52:11.2086798Z Entering 'third_party/pybind11' 2024-08-07T17:52:11.2137383Z Entering 'third_party/python-peachpy' 2024-08-07T17:52:11.2186226Z Entering 'third_party/sleef' 2024-08-07T17:52:11.2236287Z Entering 'third_party/tensorpipe' 2024-08-07T17:52:11.2289034Z Entering 'third_party/tensorpipe/third_party/googletest' 2024-08-07T17:52:11.2340907Z Entering 'third_party/tensorpipe/third_party/libnop' 2024-08-07T17:52:11.2392365Z Entering 'third_party/tensorpipe/third_party/libuv' 2024-08-07T17:52:11.2444758Z Entering 'third_party/tensorpipe/third_party/pybind11' 2024-08-07T17:52:11.2492976Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2024-08-07T17:52:11.2567068Z ##[endgroup] 2024-08-07T17:52:11.2570595Z ##[group]Persisting credentials for submodules 2024-08-07T17:52:11.2575449Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'url\.https\:\/\/github\.com\/\.insteadOf' && git config --local --unset-all 'url.https://github.com/.insteadOf' || :" 2024-08-07T17:52:11.2938449Z Entering 'android/libs/fbjni' 2024-08-07T17:52:11.3005402Z Entering 'third_party/FP16' 2024-08-07T17:52:11.3069831Z Entering 'third_party/FXdiv' 2024-08-07T17:52:11.3140474Z Entering 'third_party/NNPACK' 2024-08-07T17:52:11.3216916Z Entering 'third_party/VulkanMemoryAllocator' 2024-08-07T17:52:11.3282535Z Entering 'third_party/XNNPACK' 2024-08-07T17:52:11.3367242Z Entering 'third_party/benchmark' 2024-08-07T17:52:11.3432858Z Entering 'third_party/cpp-httplib' 2024-08-07T17:52:11.3499963Z Entering 'third_party/cpuinfo' 2024-08-07T17:52:11.3564638Z Entering 'third_party/cudnn_frontend' 2024-08-07T17:52:11.3631124Z Entering 'third_party/cutlass' 2024-08-07T17:52:11.3705777Z Entering 'third_party/eigen' 2024-08-07T17:52:11.3772873Z Entering 'third_party/fbgemm' 2024-08-07T17:52:11.3837160Z Entering 'third_party/fbgemm/third_party/asmjit' 
2024-08-07T17:52:11.3903157Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2024-08-07T17:52:11.3968362Z Entering 'third_party/fbgemm/third_party/cutlass' 2024-08-07T17:52:11.4041123Z Entering 'third_party/fbgemm/third_party/googletest' 2024-08-07T17:52:11.4114028Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2024-08-07T17:52:11.4180319Z Entering 'third_party/flatbuffers' 2024-08-07T17:52:11.4249585Z Entering 'third_party/fmt' 2024-08-07T17:52:11.4314142Z Entering 'third_party/foxi' 2024-08-07T17:52:11.4378107Z Entering 'third_party/gemmlowp/gemmlowp' 2024-08-07T17:52:11.4445513Z Entering 'third_party/gloo' 2024-08-07T17:52:11.4511303Z Entering 'third_party/googletest' 2024-08-07T17:52:11.4575929Z Entering 'third_party/ideep' 2024-08-07T17:52:11.4640164Z Entering 'third_party/ideep/mkl-dnn' 2024-08-07T17:52:11.4716599Z Entering 'third_party/ittapi' 2024-08-07T17:52:11.4782990Z Entering 'third_party/kineto' 2024-08-07T17:52:11.4851944Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2024-08-07T17:52:11.4917631Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2024-08-07T17:52:11.4989079Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2024-08-07T17:52:11.5064275Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2024-08-07T17:52:11.5131895Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2024-08-07T17:52:11.5205359Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2024-08-07T17:52:11.5272333Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2024-08-07T17:52:11.5348968Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2024-08-07T17:52:11.5415863Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2024-08-07T17:52:11.5486785Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2024-08-07T17:52:11.5559477Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2024-08-07T17:52:11.5635173Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2024-08-07T17:52:11.5704416Z Entering 'third_party/mimalloc' 2024-08-07T17:52:11.5771502Z Entering 'third_party/nccl/nccl' 2024-08-07T17:52:11.5840609Z Entering 'third_party/nlohmann' 2024-08-07T17:52:11.5909728Z Entering 'third_party/onnx' 2024-08-07T17:52:11.5993747Z Entering 'third_party/onnx/third_party/benchmark' 2024-08-07T17:52:11.6060663Z Entering 'third_party/onnx/third_party/pybind11' 2024-08-07T17:52:11.6131906Z Entering 'third_party/opentelemetry-cpp' 2024-08-07T17:52:11.6199690Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2024-08-07T17:52:11.6264578Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2024-08-07T17:52:11.6330966Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2024-08-07T17:52:11.6394856Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2024-08-07T17:52:11.6462252Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2024-08-07T17:52:11.6526493Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2024-08-07T17:52:11.6592680Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2024-08-07T17:52:11.6655688Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2024-08-07T17:52:11.6723474Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 
2024-08-07T17:52:11.6791897Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2024-08-07T17:52:11.6885220Z Entering 'third_party/pocketfft' 2024-08-07T17:52:11.6952666Z Entering 'third_party/protobuf' 2024-08-07T17:52:11.7028615Z Entering 'third_party/protobuf/third_party/benchmark' 2024-08-07T17:52:11.7094036Z Entering 'third_party/protobuf/third_party/googletest' 2024-08-07T17:52:11.7161395Z Entering 'third_party/psimd' 2024-08-07T17:52:11.7229173Z Entering 'third_party/pthreadpool' 2024-08-07T17:52:11.7296210Z Entering 'third_party/pybind11' 2024-08-07T17:52:11.7363508Z Entering 'third_party/python-peachpy' 2024-08-07T17:52:11.7442513Z Entering 'third_party/sleef' 2024-08-07T17:52:11.7508630Z Entering 'third_party/tensorpipe' 2024-08-07T17:52:11.7574533Z Entering 'third_party/tensorpipe/third_party/googletest' 2024-08-07T17:52:11.7640162Z Entering 'third_party/tensorpipe/third_party/libnop' 2024-08-07T17:52:11.7703780Z Entering 'third_party/tensorpipe/third_party/libuv' 2024-08-07T17:52:11.7767339Z Entering 'third_party/tensorpipe/third_party/pybind11' 2024-08-07T17:52:11.7829692Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2024-08-07T17:52:11.7915285Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local 'http.https://github.com/.extraheader' 'AUTHORIZATION: basic ***' && git config --local --show-origin --name-only --get-regexp remote.origin.url" 2024-08-07T17:52:11.8271358Z Entering 'android/libs/fbjni' 2024-08-07T17:52:11.8333418Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/android/libs/fbjni/config remote.origin.url 2024-08-07T17:52:11.8352652Z Entering 'third_party/FP16' 2024-08-07T17:52:11.8413972Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FP16/config remote.origin.url 2024-08-07T17:52:11.8432599Z Entering 'third_party/FXdiv' 2024-08-07T17:52:11.8493623Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/FXdiv/config remote.origin.url 2024-08-07T17:52:11.8514816Z Entering 'third_party/NNPACK' 2024-08-07T17:52:11.8573931Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK/config remote.origin.url 2024-08-07T17:52:11.8594306Z Entering 'third_party/VulkanMemoryAllocator' 2024-08-07T17:52:11.8656684Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/VulkanMemoryAllocator/config remote.origin.url 2024-08-07T17:52:11.8676123Z Entering 'third_party/XNNPACK' 2024-08-07T17:52:11.8736875Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/XNNPACK/config remote.origin.url 2024-08-07T17:52:11.8776494Z Entering 'third_party/benchmark' 2024-08-07T17:52:11.8838011Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/benchmark/config remote.origin.url 2024-08-07T17:52:11.8857293Z Entering 'third_party/cpp-httplib' 2024-08-07T17:52:11.8917736Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpp-httplib/config remote.origin.url 2024-08-07T17:52:11.8937483Z Entering 'third_party/cpuinfo' 2024-08-07T17:52:11.8997530Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cpuinfo/config remote.origin.url 2024-08-07T17:52:11.9017810Z Entering 'third_party/cudnn_frontend' 2024-08-07T17:52:11.9076326Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cudnn_frontend/config remote.origin.url 
2024-08-07T17:52:11.9096128Z Entering 'third_party/cutlass' 2024-08-07T17:52:11.9156994Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/cutlass/config remote.origin.url 2024-08-07T17:52:11.9185575Z Entering 'third_party/eigen' 2024-08-07T17:52:11.9247763Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/eigen/config remote.origin.url 2024-08-07T17:52:11.9270072Z Entering 'third_party/fbgemm' 2024-08-07T17:52:11.9331873Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/config remote.origin.url 2024-08-07T17:52:11.9351810Z Entering 'third_party/fbgemm/third_party/asmjit' 2024-08-07T17:52:11.9414400Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/asmjit/config remote.origin.url 2024-08-07T17:52:11.9433770Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2024-08-07T17:52:11.9494221Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/cpuinfo/config remote.origin.url 2024-08-07T17:52:11.9513655Z Entering 'third_party/fbgemm/third_party/cutlass' 2024-08-07T17:52:11.9573934Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/cutlass/config remote.origin.url 2024-08-07T17:52:11.9601453Z Entering 'third_party/fbgemm/third_party/googletest' 2024-08-07T17:52:11.9662240Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/googletest/config remote.origin.url 2024-08-07T17:52:11.9681074Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2024-08-07T17:52:11.9744460Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fbgemm/modules/third_party/hipify_torch/config remote.origin.url 2024-08-07T17:52:11.9765377Z Entering 'third_party/flatbuffers' 2024-08-07T17:52:11.9826525Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/flatbuffers/config remote.origin.url 2024-08-07T17:52:11.9849931Z Entering 'third_party/fmt' 2024-08-07T17:52:11.9910950Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/fmt/config remote.origin.url 2024-08-07T17:52:11.9930212Z Entering 'third_party/foxi' 2024-08-07T17:52:11.9989645Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/foxi/config remote.origin.url 2024-08-07T17:52:12.0010561Z Entering 'third_party/gemmlowp/gemmlowp' 2024-08-07T17:52:12.0071107Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gemmlowp/gemmlowp/config remote.origin.url 2024-08-07T17:52:12.0091289Z Entering 'third_party/gloo' 2024-08-07T17:52:12.0153539Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/gloo/config remote.origin.url 2024-08-07T17:52:12.0172631Z Entering 'third_party/googletest' 2024-08-07T17:52:12.0233805Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/googletest/config remote.origin.url 2024-08-07T17:52:12.0253436Z Entering 'third_party/ideep' 2024-08-07T17:52:12.0315777Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/config remote.origin.url 2024-08-07T17:52:12.0333543Z Entering 'third_party/ideep/mkl-dnn' 2024-08-07T17:52:12.0392321Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ideep/modules/mkl-dnn/config remote.origin.url 
2024-08-07T17:52:12.0419661Z Entering 'third_party/ittapi' 2024-08-07T17:52:12.0478303Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/ittapi/config remote.origin.url 2024-08-07T17:52:12.0498691Z Entering 'third_party/kineto' 2024-08-07T17:52:12.0560292Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/config remote.origin.url 2024-08-07T17:52:12.0579271Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2024-08-07T17:52:12.0641641Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/config remote.origin.url 2024-08-07T17:52:12.0659880Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2024-08-07T17:52:12.0722774Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/DCGM/config remote.origin.url 2024-08-07T17:52:12.0743583Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2024-08-07T17:52:12.0805979Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/cpr/config remote.origin.url 2024-08-07T17:52:12.0825189Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2024-08-07T17:52:12.0888886Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/fmt/config remote.origin.url 2024-08-07T17:52:12.0909179Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2024-08-07T17:52:12.0971706Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/config remote.origin.url 2024-08-07T17:52:12.0990049Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2024-08-07T17:52:12.1053979Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/gflags/modules/doc/config remote.origin.url 2024-08-07T17:52:12.1075884Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2024-08-07T17:52:12.1138398Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/glog/config remote.origin.url 2024-08-07T17:52:12.1157656Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2024-08-07T17:52:12.1218382Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/googletest/config remote.origin.url 2024-08-07T17:52:12.1238163Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2024-08-07T17:52:12.1299784Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/json/config remote.origin.url 2024-08-07T17:52:12.1319931Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2024-08-07T17:52:12.1382002Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/dynolog/modules/third_party/pfs/config remote.origin.url 2024-08-07T17:52:12.1404676Z Entering 
'third_party/kineto/libkineto/third_party/fmt' 2024-08-07T17:52:12.1464905Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/fmt/config remote.origin.url 2024-08-07T17:52:12.1484111Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2024-08-07T17:52:12.1544833Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/kineto/modules/libkineto/third_party/googletest/config remote.origin.url 2024-08-07T17:52:12.1566730Z Entering 'third_party/mimalloc' 2024-08-07T17:52:12.1627751Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/mimalloc/config remote.origin.url 2024-08-07T17:52:12.1647676Z Entering 'third_party/nccl/nccl' 2024-08-07T17:52:12.1709211Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nccl/nccl/config remote.origin.url 2024-08-07T17:52:12.1728724Z Entering 'third_party/nlohmann' 2024-08-07T17:52:12.1787850Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/nlohmann/config remote.origin.url 2024-08-07T17:52:12.1809562Z Entering 'third_party/onnx' 2024-08-07T17:52:12.1868774Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/config remote.origin.url 2024-08-07T17:52:12.1906483Z Entering 'third_party/onnx/third_party/benchmark' 2024-08-07T17:52:12.1965894Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/benchmark/config remote.origin.url 2024-08-07T17:52:12.1985487Z Entering 'third_party/onnx/third_party/pybind11' 2024-08-07T17:52:12.2047919Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/onnx/modules/third_party/pybind11/config remote.origin.url 2024-08-07T17:52:12.2071028Z Entering 'third_party/opentelemetry-cpp' 2024-08-07T17:52:12.2134441Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/config remote.origin.url 2024-08-07T17:52:12.2155189Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2024-08-07T17:52:12.2217620Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/benchmark/config remote.origin.url 2024-08-07T17:52:12.2236573Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2024-08-07T17:52:12.2297721Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/googletest/config remote.origin.url 2024-08-07T17:52:12.2316478Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2024-08-07T17:52:12.2376750Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/ms-gsl/config remote.origin.url 2024-08-07T17:52:12.2395394Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2024-08-07T17:52:12.2457246Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/nlohmann-json/config remote.origin.url 2024-08-07T17:52:12.2478282Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2024-08-07T17:52:12.2541177Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentelemetry-proto/config remote.origin.url 2024-08-07T17:52:12.2559939Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 
2024-08-07T17:52:12.2623136Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/opentracing-cpp/config remote.origin.url 2024-08-07T17:52:12.2641917Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2024-08-07T17:52:12.2703921Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/config remote.origin.url 2024-08-07T17:52:12.2722093Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2024-08-07T17:52:12.2782786Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/civetweb/config remote.origin.url 2024-08-07T17:52:12.2804783Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2024-08-07T17:52:12.2867069Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/third_party/prometheus-cpp/modules/googletest/config remote.origin.url 2024-08-07T17:52:12.2888288Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2024-08-07T17:52:12.2948959Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/opentelemetry-cpp/modules/tools/vcpkg/config remote.origin.url 2024-08-07T17:52:12.2992652Z Entering 'third_party/pocketfft' 2024-08-07T17:52:12.3054789Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pocketfft/config remote.origin.url 2024-08-07T17:52:12.3073944Z Entering 'third_party/protobuf' 2024-08-07T17:52:12.3135892Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/config remote.origin.url 2024-08-07T17:52:12.3158532Z Entering 'third_party/protobuf/third_party/benchmark' 2024-08-07T17:52:12.3220904Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/benchmark/config remote.origin.url 2024-08-07T17:52:12.3239365Z Entering 'third_party/protobuf/third_party/googletest' 2024-08-07T17:52:12.3301091Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/protobuf/modules/third_party/googletest/config remote.origin.url 2024-08-07T17:52:12.3322474Z Entering 'third_party/psimd' 2024-08-07T17:52:12.3383022Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/psimd/config remote.origin.url 2024-08-07T17:52:12.3403623Z Entering 'third_party/pthreadpool' 2024-08-07T17:52:12.3463308Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/NNPACK_deps/pthreadpool/config remote.origin.url 2024-08-07T17:52:12.3482938Z Entering 'third_party/pybind11' 2024-08-07T17:52:12.3544919Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/pybind11/config remote.origin.url 2024-08-07T17:52:12.3564303Z Entering 'third_party/python-peachpy' 2024-08-07T17:52:12.3625298Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/python-peachpy/config remote.origin.url 2024-08-07T17:52:12.3644730Z Entering 'third_party/sleef' 2024-08-07T17:52:12.3704478Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/sleef/config remote.origin.url 2024-08-07T17:52:12.3724223Z Entering 'third_party/tensorpipe' 2024-08-07T17:52:12.3785279Z 
file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/config remote.origin.url 2024-08-07T17:52:12.3804882Z Entering 'third_party/tensorpipe/third_party/googletest' 2024-08-07T17:52:12.3865497Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/googletest/config remote.origin.url 2024-08-07T17:52:12.3885268Z Entering 'third_party/tensorpipe/third_party/libnop' 2024-08-07T17:52:12.3945935Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libnop/config remote.origin.url 2024-08-07T17:52:12.3965421Z Entering 'third_party/tensorpipe/third_party/libuv' 2024-08-07T17:52:12.4026511Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/libuv/config remote.origin.url 2024-08-07T17:52:12.4046113Z Entering 'third_party/tensorpipe/third_party/pybind11' 2024-08-07T17:52:12.4109507Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/config remote.origin.url 2024-08-07T17:52:12.4127458Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2024-08-07T17:52:12.4188430Z file:/home/ec2-user/actions-runner/_work/pytorch/pytorch/.git/modules/third_party/tensorpipe/modules/third_party/pybind11/modules/tools/clang/config remote.origin.url 2024-08-07T17:52:12.5406728Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'git@github.com:' 2024-08-07T17:52:12.5772754Z Entering 'android/libs/fbjni' 2024-08-07T17:52:12.5823897Z Entering 'third_party/FP16' 2024-08-07T17:52:12.5875891Z Entering 'third_party/FXdiv' 2024-08-07T17:52:12.5927249Z Entering 'third_party/NNPACK' 2024-08-07T17:52:12.5976679Z Entering 'third_party/VulkanMemoryAllocator' 2024-08-07T17:52:12.6027234Z Entering 'third_party/XNNPACK' 2024-08-07T17:52:12.6095973Z Entering 'third_party/benchmark' 2024-08-07T17:52:12.6146147Z Entering 'third_party/cpp-httplib' 2024-08-07T17:52:12.6198067Z Entering 'third_party/cpuinfo' 2024-08-07T17:52:12.6248245Z Entering 'third_party/cudnn_frontend' 2024-08-07T17:52:12.6300028Z Entering 'third_party/cutlass' 2024-08-07T17:52:12.6359640Z Entering 'third_party/eigen' 2024-08-07T17:52:12.6413184Z Entering 'third_party/fbgemm' 2024-08-07T17:52:12.6463367Z Entering 'third_party/fbgemm/third_party/asmjit' 2024-08-07T17:52:12.6512467Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2024-08-07T17:52:12.6561961Z Entering 'third_party/fbgemm/third_party/cutlass' 2024-08-07T17:52:12.6619819Z Entering 'third_party/fbgemm/third_party/googletest' 2024-08-07T17:52:12.6669340Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2024-08-07T17:52:12.6720961Z Entering 'third_party/flatbuffers' 2024-08-07T17:52:12.6773901Z Entering 'third_party/fmt' 2024-08-07T17:52:12.6824056Z Entering 'third_party/foxi' 2024-08-07T17:52:12.6873394Z Entering 'third_party/gemmlowp/gemmlowp' 2024-08-07T17:52:12.6923255Z Entering 'third_party/gloo' 2024-08-07T17:52:12.6972509Z Entering 'third_party/googletest' 2024-08-07T17:52:12.7023812Z Entering 'third_party/ideep' 2024-08-07T17:52:12.7072214Z Entering 'third_party/ideep/mkl-dnn' 2024-08-07T17:52:12.7130414Z Entering 'third_party/ittapi' 2024-08-07T17:52:12.7181038Z Entering 'third_party/kineto' 2024-08-07T17:52:12.7231572Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2024-08-07T17:52:12.7281484Z Entering 
'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2024-08-07T17:52:12.7333627Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2024-08-07T17:52:12.7383645Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2024-08-07T17:52:12.7434321Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2024-08-07T17:52:12.7485029Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2024-08-07T17:52:12.7537798Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2024-08-07T17:52:12.7587760Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2024-08-07T17:52:12.7638022Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2024-08-07T17:52:12.7689072Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2024-08-07T17:52:12.7742016Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2024-08-07T17:52:12.7793231Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2024-08-07T17:52:12.7844652Z Entering 'third_party/mimalloc' 2024-08-07T17:52:12.7896893Z Entering 'third_party/nccl/nccl' 2024-08-07T17:52:12.7948788Z Entering 'third_party/nlohmann' 2024-08-07T17:52:12.8001492Z Entering 'third_party/onnx' 2024-08-07T17:52:12.8072716Z Entering 'third_party/onnx/third_party/benchmark' 2024-08-07T17:52:12.8124086Z Entering 'third_party/onnx/third_party/pybind11' 2024-08-07T17:52:12.8177003Z Entering 'third_party/opentelemetry-cpp' 2024-08-07T17:52:12.8230282Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2024-08-07T17:52:12.8279493Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2024-08-07T17:52:12.8329495Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2024-08-07T17:52:12.8377795Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2024-08-07T17:52:12.8429878Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2024-08-07T17:52:12.8478386Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2024-08-07T17:52:12.8527691Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2024-08-07T17:52:12.8577235Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2024-08-07T17:52:12.8630309Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2024-08-07T17:52:12.8682436Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2024-08-07T17:52:12.8756683Z Entering 'third_party/pocketfft' 2024-08-07T17:52:12.8807879Z Entering 'third_party/protobuf' 2024-08-07T17:52:12.8861895Z Entering 'third_party/protobuf/third_party/benchmark' 2024-08-07T17:52:12.8911276Z Entering 'third_party/protobuf/third_party/googletest' 2024-08-07T17:52:12.8965148Z Entering 'third_party/psimd' 2024-08-07T17:52:12.9017573Z Entering 'third_party/pthreadpool' 2024-08-07T17:52:12.9069073Z Entering 'third_party/pybind11' 2024-08-07T17:52:12.9121504Z Entering 'third_party/python-peachpy' 2024-08-07T17:52:12.9172759Z Entering 'third_party/sleef' 2024-08-07T17:52:12.9225572Z Entering 'third_party/tensorpipe' 2024-08-07T17:52:12.9276567Z Entering 'third_party/tensorpipe/third_party/googletest' 2024-08-07T17:52:12.9329285Z Entering 'third_party/tensorpipe/third_party/libnop' 2024-08-07T17:52:12.9379350Z Entering 'third_party/tensorpipe/third_party/libuv' 2024-08-07T17:52:12.9430778Z Entering 'third_party/tensorpipe/third_party/pybind11' 
2024-08-07T17:52:12.9479181Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2024-08-07T17:52:12.9549953Z [command]/usr/bin/git submodule foreach --recursive git config --local --add 'url.https://github.com/.insteadOf' 'org-21003710@github.com:' 2024-08-07T17:52:12.9908832Z Entering 'android/libs/fbjni' 2024-08-07T17:52:12.9958642Z Entering 'third_party/FP16' 2024-08-07T17:52:13.0009623Z Entering 'third_party/FXdiv' 2024-08-07T17:52:13.0059558Z Entering 'third_party/NNPACK' 2024-08-07T17:52:13.0111289Z Entering 'third_party/VulkanMemoryAllocator' 2024-08-07T17:52:13.0162291Z Entering 'third_party/XNNPACK' 2024-08-07T17:52:13.0236225Z Entering 'third_party/benchmark' 2024-08-07T17:52:13.0287471Z Entering 'third_party/cpp-httplib' 2024-08-07T17:52:13.0338336Z Entering 'third_party/cpuinfo' 2024-08-07T17:52:13.0389339Z Entering 'third_party/cudnn_frontend' 2024-08-07T17:52:13.0440480Z Entering 'third_party/cutlass' 2024-08-07T17:52:13.0500583Z Entering 'third_party/eigen' 2024-08-07T17:52:13.0553510Z Entering 'third_party/fbgemm' 2024-08-07T17:52:13.0605723Z Entering 'third_party/fbgemm/third_party/asmjit' 2024-08-07T17:52:13.0653722Z Entering 'third_party/fbgemm/third_party/cpuinfo' 2024-08-07T17:52:13.0703776Z Entering 'third_party/fbgemm/third_party/cutlass' 2024-08-07T17:52:13.0761528Z Entering 'third_party/fbgemm/third_party/googletest' 2024-08-07T17:52:13.0811399Z Entering 'third_party/fbgemm/third_party/hipify_torch' 2024-08-07T17:52:13.0862917Z Entering 'third_party/flatbuffers' 2024-08-07T17:52:13.0917155Z Entering 'third_party/fmt' 2024-08-07T17:52:13.0967995Z Entering 'third_party/foxi' 2024-08-07T17:52:13.1019694Z Entering 'third_party/gemmlowp/gemmlowp' 2024-08-07T17:52:13.1070050Z Entering 'third_party/gloo' 2024-08-07T17:52:13.1120333Z Entering 'third_party/googletest' 2024-08-07T17:52:13.1170793Z Entering 'third_party/ideep' 2024-08-07T17:52:13.1220243Z Entering 'third_party/ideep/mkl-dnn' 2024-08-07T17:52:13.1278088Z Entering 'third_party/ittapi' 2024-08-07T17:52:13.1328621Z Entering 'third_party/kineto' 2024-08-07T17:52:13.1378576Z Entering 'third_party/kineto/libkineto/third_party/dynolog' 2024-08-07T17:52:13.1428821Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM' 2024-08-07T17:52:13.1481565Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr' 2024-08-07T17:52:13.1531747Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt' 2024-08-07T17:52:13.1582417Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags' 2024-08-07T17:52:13.1633139Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc' 2024-08-07T17:52:13.1684571Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog' 2024-08-07T17:52:13.1735777Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest' 2024-08-07T17:52:13.1784743Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json' 2024-08-07T17:52:13.1836779Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs' 2024-08-07T17:52:13.1888084Z Entering 'third_party/kineto/libkineto/third_party/fmt' 2024-08-07T17:52:13.1939565Z Entering 'third_party/kineto/libkineto/third_party/googletest' 2024-08-07T17:52:13.1989632Z Entering 'third_party/mimalloc' 2024-08-07T17:52:13.2041740Z Entering 'third_party/nccl/nccl' 2024-08-07T17:52:13.2091004Z Entering 'third_party/nlohmann' 2024-08-07T17:52:13.2142587Z Entering 'third_party/onnx' 
2024-08-07T17:52:13.2211559Z Entering 'third_party/onnx/third_party/benchmark' 2024-08-07T17:52:13.2260901Z Entering 'third_party/onnx/third_party/pybind11' 2024-08-07T17:52:13.2315574Z Entering 'third_party/opentelemetry-cpp' 2024-08-07T17:52:13.2367658Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark' 2024-08-07T17:52:13.2419907Z Entering 'third_party/opentelemetry-cpp/third_party/googletest' 2024-08-07T17:52:13.2469812Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl' 2024-08-07T17:52:13.2519987Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json' 2024-08-07T17:52:13.2571204Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto' 2024-08-07T17:52:13.2620074Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp' 2024-08-07T17:52:13.2668796Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp' 2024-08-07T17:52:13.2717744Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb' 2024-08-07T17:52:13.2770408Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest' 2024-08-07T17:52:13.2821786Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg' 2024-08-07T17:52:13.2896288Z Entering 'third_party/pocketfft' 2024-08-07T17:52:13.2946698Z Entering 'third_party/protobuf' 2024-08-07T17:52:13.3002103Z Entering 'third_party/protobuf/third_party/benchmark' 2024-08-07T17:52:13.3050203Z Entering 'third_party/protobuf/third_party/googletest' 2024-08-07T17:52:13.3102212Z Entering 'third_party/psimd' 2024-08-07T17:52:13.3151133Z Entering 'third_party/pthreadpool' 2024-08-07T17:52:13.3201914Z Entering 'third_party/pybind11' 2024-08-07T17:52:13.3251580Z Entering 'third_party/python-peachpy' 2024-08-07T17:52:13.3302116Z Entering 'third_party/sleef' 2024-08-07T17:52:13.3351334Z Entering 'third_party/tensorpipe' 2024-08-07T17:52:13.3403541Z Entering 'third_party/tensorpipe/third_party/googletest' 2024-08-07T17:52:13.3452658Z Entering 'third_party/tensorpipe/third_party/libnop' 2024-08-07T17:52:13.3501897Z Entering 'third_party/tensorpipe/third_party/libuv' 2024-08-07T17:52:13.3551702Z Entering 'third_party/tensorpipe/third_party/pybind11' 2024-08-07T17:52:13.3600898Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang' 2024-08-07T17:52:13.3664972Z ##[endgroup] 2024-08-07T17:52:13.3714679Z [command]/usr/bin/git log -1 --format='%H' 2024-08-07T17:52:13.3745949Z '016588f53c6904b840aa56aa86f95460b4d9c996' 2024-08-07T17:52:13.3941582Z Prepare all required actions 2024-08-07T17:52:13.3942291Z Getting action download info 2024-08-07T17:52:13.5362631Z ##[group]Run ./.github/actions/setup-linux 2024-08-07T17:52:13.5363077Z env: 2024-08-07T17:52:13.5363379Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:13.5363724Z ##[endgroup] 2024-08-07T17:52:13.5425774Z ##[group]Run set -euo pipefail 2024-08-07T17:52:13.5426241Z set -euo pipefail 2024-08-07T17:52:13.5426625Z function get_ec2_metadata() { 2024-08-07T17:52:13.5427143Z  # Pulled from instance metadata endpoint for EC2 2024-08-07T17:52:13.5427978Z  # see https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html 2024-08-07T17:52:13.5428705Z  category=$1 2024-08-07T17:52:13.5429239Z  # If it is GCP runner (runner name contains gcp), do not run this 2024-08-07T17:52:13.5429840Z  runner_name_str=i-07832b6703dca2070 2024-08-07T17:52:13.5430354Z  if [[ -f /.inarc ]]; then 2024-08-07T17:52:13.5430843Z  echo "ARC Runner, no info on ec2 metadata" 2024-08-07T17:52:13.5431364Z  elif [[ 
$runner_name_str == *"gcp"* ]]; then 2024-08-07T17:52:13.5432000Z  echo "Runner is from Google Cloud Platform, No info on ec2 metadata" 2024-08-07T17:52:13.5432653Z  else 2024-08-07T17:52:13.5433117Z  curl -fsSL "http://169.254.169.254/latest/meta-data/${category}" 2024-08-07T17:52:13.5433676Z  fi 2024-08-07T17:52:13.5433979Z } 2024-08-07T17:52:13.5434330Z echo "ami-id: $(get_ec2_metadata ami-id)" 2024-08-07T17:52:13.5434911Z echo "instance-id: $(get_ec2_metadata instance-id)" 2024-08-07T17:52:13.5435556Z echo "instance-type: $(get_ec2_metadata instance-type)" 2024-08-07T17:52:13.5436118Z echo "system info $(uname -a)" 2024-08-07T17:52:13.5445930Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:13.5446436Z env: 2024-08-07T17:52:13.5446725Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:13.5447089Z ##[endgroup] 2024-08-07T17:52:13.5558131Z ami-id: ami-06c68f701d8090592 2024-08-07T17:52:13.5623013Z instance-id: i-07832b6703dca2070 2024-08-07T17:52:13.5682012Z instance-type: g3.4xlarge 2024-08-07T17:52:13.5695974Z system info Linux ip-10-0-62-73.ec2.internal 6.1.94-99.176.amzn2023.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 18 14:57:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux 2024-08-07T17:52:13.5752027Z ##[group]Run echo "IN_ARC_RUNNER=$([ -f /.inarc ] && echo true || echo false)" >> $GITHUB_OUTPUT 2024-08-07T17:52:13.5752894Z echo "IN_ARC_RUNNER=$([ -f /.inarc ] && echo true || echo false)" >> $GITHUB_OUTPUT 2024-08-07T17:52:13.5760293Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:13.5760796Z env: 2024-08-07T17:52:13.5761085Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:13.5761444Z ##[endgroup] 2024-08-07T17:52:13.5854843Z ##[group]Run if systemctl is-active --quiet docker; then 2024-08-07T17:52:13.5855469Z if systemctl is-active --quiet docker; then 2024-08-07T17:52:13.5855978Z  echo "Docker daemon is running..."; 2024-08-07T17:52:13.5856433Z else 2024-08-07T17:52:13.5856919Z  echo "Starting docker deamon..." && sudo systemctl start docker; 2024-08-07T17:52:13.5857469Z fi 2024-08-07T17:52:13.5863999Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:13.5864482Z env: 2024-08-07T17:52:13.5864754Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:13.5865099Z ##[endgroup] 2024-08-07T17:52:13.5954472Z Docker daemon is running... 2024-08-07T17:52:13.6014116Z ##[group]Run nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482 2024-08-07T17:52:13.6014670Z with: 2024-08-07T17:52:13.6015189Z shell: bash 2024-08-07T17:52:13.6015504Z timeout_minutes: 5 2024-08-07T17:52:13.6015825Z max_attempts: 3 2024-08-07T17:52:13.6016153Z retry_wait_seconds: 30 2024-08-07T17:52:13.6017729Z command: AWS_ACCOUNT_ID=$(aws sts get-caller-identity|grep Account|cut -f4 -d\") aws ecr get-login-password --region "$AWS_DEFAULT_REGION" | docker login --username AWS \ --password-stdin "$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com" 2024-08-07T17:52:13.6019162Z polling_interval_seconds: 1 2024-08-07T17:52:13.6019594Z warning_on_retry: true 2024-08-07T17:52:13.6019948Z continue_on_error: false 2024-08-07T17:52:13.6020284Z env: 2024-08-07T17:52:13.6020550Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:13.6020895Z AWS_RETRY_MODE: standard 2024-08-07T17:52:13.6021235Z AWS_MAX_ATTEMPTS: 5 2024-08-07T17:52:13.6021556Z AWS_DEFAULT_REGION: us-east-1 2024-08-07T17:52:13.6021914Z ##[endgroup] 2024-08-07T17:52:15.1273428Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 
2024-08-07T17:52:15.1274251Z Configure a credential helper to remove this warning. See 2024-08-07T17:52:15.1275393Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2024-08-07T17:52:15.1275875Z 2024-08-07T17:52:15.1276034Z Login Succeeded 2024-08-07T17:52:15.6805494Z Command completed after 1 attempt(s). 2024-08-07T17:52:15.6942636Z ##[group]Run env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2024-08-07T17:52:15.6943353Z env | grep '^GITHUB' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2024-08-07T17:52:15.6943970Z env | grep '^CI' >> "/tmp/github_env_${GITHUB_RUN_ID}" 2024-08-07T17:52:15.6951972Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:15.6952486Z env: 2024-08-07T17:52:15.6952764Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:15.6953133Z ##[endgroup] 2024-08-07T17:52:15.7081170Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty 2024-08-07T17:52:15.7081940Z # ignore expansion of "docker ps -q" since it could be empty 2024-08-07T17:52:15.7082528Z # shellcheck disable=SC2046 2024-08-07T17:52:15.7083000Z docker stop $(docker ps -q) || true 2024-08-07T17:52:15.7083500Z # Prune all of the docker images 2024-08-07T17:52:15.7083937Z docker system prune -af 2024-08-07T17:52:15.7090857Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:15.7091344Z env: 2024-08-07T17:52:15.7091642Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:15.7091974Z ##[endgroup] 2024-08-07T17:52:15.7444715Z "docker stop" requires at least 1 argument. 2024-08-07T17:52:15.7445226Z See 'docker stop --help'. 2024-08-07T17:52:15.7445455Z 2024-08-07T17:52:15.7445675Z Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...] 2024-08-07T17:52:15.7446032Z 2024-08-07T17:52:15.7446185Z Stop one or more running containers 2024-08-07T17:52:15.7639768Z Total reclaimed space: 0B 2024-08-07T17:52:15.7683719Z ##[group]Run set +e 2024-08-07T17:52:15.7684157Z set +e 2024-08-07T17:52:15.7684477Z set -x 2024-08-07T17:52:15.7684770Z  2024-08-07T17:52:15.7685101Z PT_DOMAIN=download.pytorch.org 2024-08-07T17:52:15.7685939Z # TODO: Flaky access to download.pytorch.org https://github.com/pytorch/pytorch/issues/100400, 2024-08-07T17:52:15.7686918Z # cleaning this up once the issue is fixed. There are more than one resolved IP here, the last 2024-08-07T17:52:15.7687625Z # one is returned at random 2024-08-07T17:52:15.7688226Z RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" | tail -n1) 2024-08-07T17:52:15.7688721Z  2024-08-07T17:52:15.7689046Z if [ -z "${RESOLVED_IP}" ]; then 2024-08-07T17:52:15.7689634Z  echo "Couldn't resolve ${PT_DOMAIN}, retrying with Google DNS..." 2024-08-07T17:52:15.7690327Z  RESOLVED_IP=$(dig -4 +short "${PT_DOMAIN}" @8.8.8.8 | tail -n1) 2024-08-07T17:52:15.7690870Z  2024-08-07T17:52:15.7691201Z  if [ -z "${RESOLVED_IP}" ]; then 2024-08-07T17:52:15.7691956Z  echo "Couldn't resolve ${PT_DOMAIN}, exiting..." 
2024-08-07T17:52:15.7692439Z  exit 1 2024-08-07T17:52:15.7692770Z  fi 2024-08-07T17:52:15.7693230Z fi 2024-08-07T17:52:15.7693546Z  2024-08-07T17:52:15.7693905Z if grep -r "${PT_DOMAIN}" /etc/hosts; then 2024-08-07T17:52:15.7694395Z  # Clean up any old records first 2024-08-07T17:52:15.7694895Z  sudo sed -i "/${PT_DOMAIN}/d" /etc/hosts 2024-08-07T17:52:15.7695843Z fi 2024-08-07T17:52:15.7696121Z  2024-08-07T17:52:15.7696532Z echo "${RESOLVED_IP} ${PT_DOMAIN}" | sudo tee -a /etc/hosts 2024-08-07T17:52:15.7697041Z cat /etc/hosts 2024-08-07T17:52:15.7704183Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:15.7704681Z env: 2024-08-07T17:52:15.7704975Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:15.7705307Z ##[endgroup] 2024-08-07T17:52:15.7734222Z + PT_DOMAIN=download.pytorch.org 2024-08-07T17:52:15.7740644Z ++ dig -4 +short download.pytorch.org 2024-08-07T17:52:15.7741567Z ++ tail -n1 2024-08-07T17:52:15.8268179Z + RESOLVED_IP=18.160.10.22 2024-08-07T17:52:15.8268634Z + '[' -z 18.160.10.22 ']' 2024-08-07T17:52:15.8269019Z + grep -r download.pytorch.org /etc/hosts 2024-08-07T17:52:15.8283252Z 18.165.98.47 download.pytorch.org 2024-08-07T17:52:15.8285138Z + sudo sed -i /download.pytorch.org/d /etc/hosts 2024-08-07T17:52:15.9810924Z + echo '18.160.10.22 download.pytorch.org' 2024-08-07T17:52:15.9811447Z + sudo tee -a /etc/hosts 2024-08-07T17:52:16.0364926Z 18.160.10.22 download.pytorch.org 2024-08-07T17:52:16.0385356Z + cat /etc/hosts 2024-08-07T17:52:16.0395974Z 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 2024-08-07T17:52:16.0405707Z ::1 localhost6 localhost6.localdomain6 2024-08-07T17:52:16.0406626Z 18.160.10.22 download.pytorch.org 2024-08-07T17:52:16.0567823Z ##[group]Run pytorch/test-infra/.github/actions/calculate-docker-image@main 2024-08-07T17:52:16.0568512Z with: 2024-08-07T17:52:16.0569471Z docker-image-name: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:16.0570533Z docker-build-dir: .ci/docker 2024-08-07T17:52:16.0570927Z working-directory: . 2024-08-07T17:52:16.0571407Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:16.0571930Z force-push: false 2024-08-07T17:52:16.0572251Z env: 2024-08-07T17:52:16.0572545Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:16.0572883Z ##[endgroup] 2024-08-07T17:52:16.0605969Z ##[group]Run set -ex 2024-08-07T17:52:16.0606382Z set -ex 2024-08-07T17:52:16.0606700Z  2024-08-07T17:52:16.0607230Z # If the docker build directory or the build script doesn't exist, the action will 2024-08-07T17:52:16.0608157Z # gracefully return the docker image name as it is. Pulling docker image in Linux 2024-08-07T17:52:16.0608930Z # job could then download the pre-built image as usual 2024-08-07T17:52:16.0609619Z if [[ ! -d "${DOCKER_BUILD_DIR}" ]] || [[ ! -f "${DOCKER_BUILD_DIR}/build.sh" ]]; then 2024-08-07T17:52:16.0610259Z  echo "skip=true" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0610860Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0611397Z  2024-08-07T17:52:16.0611886Z  echo "There is no Docker build script in ${REPO_NAME} repo, skipping..." 
2024-08-07T17:52:16.0612482Z  exit 0 2024-08-07T17:52:16.0612777Z else 2024-08-07T17:52:16.0613145Z  echo "skip=false" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0613595Z fi 2024-08-07T17:52:16.0613870Z  2024-08-07T17:52:16.0614331Z if [[ "${DOCKER_IMAGE_NAME}" == *"${DOCKER_REGISTRY}/${REPO_NAME}"* ]]; then 2024-08-07T17:52:16.0615138Z  # The docker image name already includes the ECR prefix and tag, so we can just 2024-08-07T17:52:16.0616113Z  # use it as it is, but first let's extract the tag 2024-08-07T17:52:16.0616766Z  DOCKER_TAG=$(echo "${DOCKER_IMAGE_NAME}" | awk -F '[:,]' '{print $2}') 2024-08-07T17:52:16.0617453Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0618108Z  echo "docker-image=${DOCKER_IMAGE_NAME}" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0618671Z else 2024-08-07T17:52:16.0619108Z  DOCKER_TAG=$(git rev-parse HEAD:"${DOCKER_BUILD_DIR}") 2024-08-07T17:52:16.0619864Z  echo "docker-tag=${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0620676Z  echo "docker-image=${DOCKER_REGISTRY}/${REPO_NAME}/${DOCKER_IMAGE_NAME}:${DOCKER_TAG}" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0621368Z fi 2024-08-07T17:52:16.0629421Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:16.0629885Z env: 2024-08-07T17:52:16.0630185Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:16.0630524Z REPO_NAME: pytorch 2024-08-07T17:52:16.0631453Z DOCKER_IMAGE_NAME: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:16.0632449Z DOCKER_BUILD_DIR: .ci/docker 2024-08-07T17:52:16.0632928Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:16.0633402Z ##[endgroup] 2024-08-07T17:52:16.0663330Z + [[ ! -d .ci/docker ]] 2024-08-07T17:52:16.0663696Z + [[ ! 
-f .ci/docker/build.sh ]] 2024-08-07T17:52:16.0664070Z + echo skip=false 2024-08-07T17:52:16.0665529Z + [[ 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 == *\3\0\8\5\3\5\3\8\5\1\1\4\.\d\k\r\.\e\c\r\.\u\s\-\e\a\s\t\-\1\.\a\m\a\z\o\n\a\w\s\.\c\o\m\/\p\y\t\o\r\c\h* ]] 2024-08-07T17:52:16.0672312Z ++ echo 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:16.0673305Z ++ awk -F '[:,]' '{print $2}' 2024-08-07T17:52:16.0700542Z + DOCKER_TAG=02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:16.0701127Z + echo docker-tag=02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:16.0702181Z + echo docker-image=308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:16.0744654Z ##[group]Run set +e 2024-08-07T17:52:16.0745093Z set +e 2024-08-07T17:52:16.0745396Z set -x 2024-08-07T17:52:16.0745712Z  2024-08-07T17:52:16.0746006Z login() { 2024-08-07T17:52:16.0746619Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2024-08-07T17:52:16.0747313Z } 2024-08-07T17:52:16.0747602Z  2024-08-07T17:52:16.0747886Z retry () { 2024-08-07T17:52:16.0748272Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2024-08-07T17:52:16.0748705Z } 2024-08-07T17:52:16.0748979Z  2024-08-07T17:52:16.0749305Z retry login "${DOCKER_REGISTRY}" 2024-08-07T17:52:16.0749706Z  2024-08-07T17:52:16.0750163Z # Check if image already exists, if it does then skip building it 2024-08-07T17:52:16.0750837Z if docker manifest inspect "${DOCKER_IMAGE}"; then 2024-08-07T17:52:16.0751332Z  exit 0 2024-08-07T17:52:16.0751628Z fi 2024-08-07T17:52:16.0751923Z  2024-08-07T17:52:16.0752389Z # NB: This part requires a full checkout. Otherwise, the merge base will 2024-08-07T17:52:16.0753169Z # be empty. The default action would be to continue rebuild the image 2024-08-07T17:52:16.0753868Z if [[ "$BASE_REVISION" = "$(git rev-parse HEAD)" ]]; then 2024-08-07T17:52:16.0754495Z  # if we're on the base branch then use the parent commit 2024-08-07T17:52:16.0755222Z  MERGE_BASE=$(git rev-parse HEAD~) 2024-08-07T17:52:16.0755661Z else 2024-08-07T17:52:16.0756115Z  # otherwise we're on a PR, so use the most recent base commit 2024-08-07T17:52:16.0756739Z  MERGE_BASE=$(git merge-base HEAD "$BASE_REVISION") 2024-08-07T17:52:16.0757234Z fi 2024-08-07T17:52:16.0757529Z  2024-08-07T17:52:16.0757840Z if [[ -z "${MERGE_BASE}" ]]; then 2024-08-07T17:52:16.0758329Z  echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0758785Z  2024-08-07T17:52:16.0759400Z  echo "Finding merge base only works with full checkout, please set fetch-depth to 0, continuing ..." 2024-08-07T17:52:16.0760137Z  exit 0 2024-08-07T17:52:16.0760450Z fi 2024-08-07T17:52:16.0760724Z  2024-08-07T17:52:16.0761148Z if ! 
git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}"; then 2024-08-07T17:52:16.0762069Z  echo "Directory '${DOCKER_BUILD_DIR}' not found in commit $MERGE_BASE, you should rebase onto a more recent commit" 2024-08-07T17:52:16.0762838Z  exit 1 2024-08-07T17:52:16.0763156Z fi 2024-08-07T17:52:16.0763449Z  2024-08-07T17:52:16.0763919Z PREVIOUS_DOCKER_TAG=$(git rev-parse "${MERGE_BASE}:${DOCKER_BUILD_DIR}") 2024-08-07T17:52:16.0764792Z # If no image exists but the hash is the same as the previous hash then we should error out here 2024-08-07T17:52:16.0765581Z if [[ "${PREVIOUS_DOCKER_TAG}" == "${DOCKER_TAG}" ]]; then 2024-08-07T17:52:16.0766495Z  echo "WARNING: Something has gone wrong and the previous image isn't available for the merge-base of your branch" 2024-08-07T17:52:16.0767484Z  echo " Will re-build docker image to store in local cache, TTS may be longer" 2024-08-07T17:52:16.0768107Z fi 2024-08-07T17:52:16.0768402Z  2024-08-07T17:52:16.0768873Z echo "rebuild=true" >> "${GITHUB_OUTPUT}" 2024-08-07T17:52:16.0775560Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:16.0776040Z env: 2024-08-07T17:52:16.0776309Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:16.0776664Z DOCKER_BUILD_DIR: .ci/docker 2024-08-07T17:52:16.0777094Z BASE_REVISION: 6ce09a9bb33e4011761558032e2165ad7b49fb68 2024-08-07T17:52:16.0778106Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:16.0779139Z DOCKER_TAG: 02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:16.0779706Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:16.0780174Z ##[endgroup] 2024-08-07T17:52:16.0808613Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:16.0809510Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:16.0811827Z + aws ecr get-login-password --region us-east-1 2024-08-07T17:52:16.0813113Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:16.8059586Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2024-08-07T17:52:16.8060437Z Configure a credential helper to remove this warning. 
See 2024-08-07T17:52:16.8061834Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2024-08-07T17:52:16.8062484Z 2024-08-07T17:52:16.8062623Z Login Succeeded 2024-08-07T17:52:16.8075056Z + docker manifest inspect 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:17.0658033Z { 2024-08-07T17:52:17.0658596Z "schemaVersion": 2, 2024-08-07T17:52:17.0659554Z "mediaType": "application/vnd.docker.distribution.manifest.v2+json", 2024-08-07T17:52:17.0660501Z "config": { 2024-08-07T17:52:17.0661217Z "mediaType": "application/vnd.docker.container.image.v1+json", 2024-08-07T17:52:17.0662074Z "size": 48439, 2024-08-07T17:52:17.0663403Z "digest": "sha256:6ec36276acd88c9be8b44d856744037d399b35f4bb1703e637c27ae2b254c901" 2024-08-07T17:52:17.0664538Z }, 2024-08-07T17:52:17.0664993Z "layers": [ 2024-08-07T17:52:17.0665479Z { 2024-08-07T17:52:17.0666246Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0667262Z "size": 28580681, 2024-08-07T17:52:17.0668168Z "digest": "sha256:7a2c559011895d255fce249c00396abff5ae7e0c0a92931d0ed493e71de78e3a" 2024-08-07T17:52:17.0669295Z }, 2024-08-07T17:52:17.0669722Z { 2024-08-07T17:52:17.0670427Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0671303Z "size": 7943451, 2024-08-07T17:52:17.0672202Z "digest": "sha256:224fe954d7252f10539d243d6c9688806f7d13ad775ed02e7f7c79077844510d" 2024-08-07T17:52:17.0673266Z }, 2024-08-07T17:52:17.0673687Z { 2024-08-07T17:52:17.0674487Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0675503Z "size": 55728572, 2024-08-07T17:52:17.0676555Z "digest": "sha256:75722010b82e31715876aeeed0b2cee414296f0124fdfa061ab845ba2a158450" 2024-08-07T17:52:17.0677744Z }, 2024-08-07T17:52:17.0678175Z { 2024-08-07T17:52:17.0678927Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0679885Z "size": 186, 2024-08-07T17:52:17.0680477Z "digest": "sha256:d527cbbb87e3016fd72a18a9b468c945ad0ca27c5770b39debd6ed704db3a195" 2024-08-07T17:52:17.0681115Z }, 2024-08-07T17:52:17.0681387Z { 2024-08-07T17:52:17.0681804Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0682349Z "size": 6886, 2024-08-07T17:52:17.0683208Z "digest": "sha256:b57676e46aee1a8c82e528d78e5a13e31142524eea31c8b213d69ddcb6f1fe80" 2024-08-07T17:52:17.0683866Z }, 2024-08-07T17:52:17.0684127Z { 2024-08-07T17:52:17.0684562Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0685332Z "size": 1329001756, 2024-08-07T17:52:17.0685950Z "digest": "sha256:a8c1e85b5e14cec7af70bf304cb4d4cee6a1d25eb8215b2cf4fdc33e5af5e108" 2024-08-07T17:52:17.0686605Z }, 2024-08-07T17:52:17.0686855Z { 2024-08-07T17:52:17.0687300Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0687863Z "size": 62501, 2024-08-07T17:52:17.0688395Z "digest": "sha256:a41a8d1c11c8d80fe4e82b0d05478f8d51176ff20b8350905fc1b25c93a51198" 2024-08-07T17:52:17.0689022Z }, 2024-08-07T17:52:17.0689293Z { 2024-08-07T17:52:17.0689719Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0690286Z "size": 1684, 2024-08-07T17:52:17.0690826Z "digest": "sha256:0c12278907551c2962927d27c115f6f7bf0df894318b8aea6ece3ef01ccd0a8a" 2024-08-07T17:52:17.0691422Z }, 2024-08-07T17:52:17.0691694Z { 2024-08-07T17:52:17.0692137Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0692675Z "size": 1523, 2024-08-07T17:52:17.0693255Z "digest": "sha256:d8d1234baab3ec9ccb8bb710fc6b8ff6c10896ba2e8d27a347583eca770f9ff1" 2024-08-07T17:52:17.0693901Z }, 2024-08-07T17:52:17.0694274Z + exit 0 2024-08-07T17:52:17.0694549Z { 2024-08-07T17:52:17.0695410Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0696052Z "size": 2528295403, 2024-08-07T17:52:17.0696605Z "digest": "sha256:7ed32bc8e4696fcdb2feef850781160597b2275ad756819c4add88236b0577d5" 2024-08-07T17:52:17.0697228Z }, 2024-08-07T17:52:17.0697479Z { 2024-08-07T17:52:17.0697922Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0698478Z "size": 86016, 2024-08-07T17:52:17.0699017Z "digest": "sha256:ec1e7978c1fe161ced1d98092a51e7c5953ca5fda5577f54df9dbda4afff1b2b" 2024-08-07T17:52:17.0699641Z }, 2024-08-07T17:52:17.0699910Z { 2024-08-07T17:52:17.0700332Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0700884Z "size": 1823, 2024-08-07T17:52:17.0701434Z "digest": "sha256:66b43372aa397c4303ca4e0e1122516909bca0c87b9b4bfb3972b8fd0c1d4390" 2024-08-07T17:52:17.0702259Z }, 2024-08-07T17:52:17.0702522Z { 2024-08-07T17:52:17.0702959Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0703503Z "size": 246768020, 2024-08-07T17:52:17.0704061Z "digest": "sha256:b6662193c745ec6b991e800e920c233379c7c0e74f2f64d9b82dd5dc4a27eb14" 2024-08-07T17:52:17.0704672Z }, 2024-08-07T17:52:17.0704920Z { 2024-08-07T17:52:17.0705362Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0705923Z "size": 545, 2024-08-07T17:52:17.0706478Z "digest": "sha256:5be2b638d110dd5ed631ce7ddf7eefa26b3abd49cf3ab845be5ecb3daec46b67" 2024-08-07T17:52:17.0707112Z }, 2024-08-07T17:52:17.0707359Z { 2024-08-07T17:52:17.0707807Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0708361Z "size": 1283, 2024-08-07T17:52:17.0708903Z "digest": "sha256:71ca63790839b9bfa870ee6927d5d7b60aaa1fc65b38d3e8fc42ace8911859ef" 2024-08-07T17:52:17.0709554Z }, 2024-08-07T17:52:17.0709819Z { 2024-08-07T17:52:17.0710245Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0710804Z "size": 484, 2024-08-07T17:52:17.0711348Z "digest": "sha256:8a74804dc4fa9ad5369e1ae6677a4e17bcc2c53d209a67738dbc795420066650" 2024-08-07T17:52:17.0711947Z }, 2024-08-07T17:52:17.0712214Z { 2024-08-07T17:52:17.0712664Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0713233Z "size": 91712377, 2024-08-07T17:52:17.0713801Z "digest": "sha256:3bacb5389b745ab1f7590db3db714e689a99ee0d7c709f907ccd6906f39905c5" 2024-08-07T17:52:17.0714425Z }, 2024-08-07T17:52:17.0714673Z { 2024-08-07T17:52:17.0715118Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0715681Z "size": 3231, 2024-08-07T17:52:17.0716325Z "digest": "sha256:a8911a72541a4ab35894015b7fb1174ea61c59fedc863dfa563324af5d6ae752" 2024-08-07T17:52:17.0716972Z }, 2024-08-07T17:52:17.0717247Z { 2024-08-07T17:52:17.0717675Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0718257Z "size": 1909, 2024-08-07T17:52:17.0718793Z "digest": "sha256:55d020986bb7c1702235b111c4b83d990fa63ce6045c5ac358a026832bbe8550" 2024-08-07T17:52:17.0719387Z }, 2024-08-07T17:52:17.0719657Z { 2024-08-07T17:52:17.0720138Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0720650Z "size": 700, 2024-08-07T17:52:17.0721174Z "digest": "sha256:679e209a81f89d0be588ce19c3f5191f73883a86e44ab7b3653a3be3f267b69e" 2024-08-07T17:52:17.0721788Z }, 2024-08-07T17:52:17.0748164Z { 2024-08-07T17:52:17.0748654Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0749165Z "size": 2785856582, 2024-08-07T17:52:17.0749750Z "digest": "sha256:d4fb7093f54f7f71e63223ef934b9ab258d53922a199ac4736897cdb90df0683" 2024-08-07T17:52:17.0750431Z }, 2024-08-07T17:52:17.0750659Z { 2024-08-07T17:52:17.0751059Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0751568Z "size": 381, 2024-08-07T17:52:17.0752074Z "digest": "sha256:0d8ab4023e81a9284aef759a1b3c759a907d0cbd39361f3ef0ce4f8c3994f882" 2024-08-07T17:52:17.0752663Z }, 2024-08-07T17:52:17.0752899Z { 2024-08-07T17:52:17.0753369Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0753864Z "size": 12876, 2024-08-07T17:52:17.0754368Z "digest": "sha256:bf191f5f5a0a370ba7136fa618cd8cb1eb76e5f82b8c5773a965cdd105515924" 2024-08-07T17:52:17.0754966Z }, 2024-08-07T17:52:17.0755192Z { 2024-08-07T17:52:17.0755603Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0756130Z "size": 803, 2024-08-07T17:52:17.0756638Z "digest": "sha256:14653e4e245feef24e0aabd8a4cd81c24298f800facc0299f797b161da696a1d" 2024-08-07T17:52:17.0757247Z }, 2024-08-07T17:52:17.0757484Z { 2024-08-07T17:52:17.0757904Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0758604Z "size": 106, 2024-08-07T17:52:17.0759122Z "digest": "sha256:8bdbb000c39dd99342429f8a1183bdb36f312b532ea7e47eb7719fea84c669f6" 2024-08-07T17:52:17.0759703Z }, 2024-08-07T17:52:17.0759932Z { 2024-08-07T17:52:17.0760344Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0760868Z "size": 504, 2024-08-07T17:52:17.0761369Z "digest": "sha256:277383b63c0797c1bd9e23c6f38d6ba85e6e321e2dc6b21fcd832f1935f5af87" 2024-08-07T17:52:17.0761950Z }, 2024-08-07T17:52:17.0762181Z { 2024-08-07T17:52:17.0762583Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0763108Z "size": 121477300, 2024-08-07T17:52:17.0763632Z "digest": "sha256:890313244493db7d65ed3f1cf91a94e6e50bbdb4df87b5bb829a1a3236ffaeb3" 2024-08-07T17:52:17.0764207Z }, 2024-08-07T17:52:17.0764436Z { 2024-08-07T17:52:17.0764852Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0765379Z "size": 109, 2024-08-07T17:52:17.0765906Z "digest": "sha256:f1e3cc0f57ee16caa6ffefa72c065dfe99a5d19a3a352342dfa26b63661589a2" 2024-08-07T17:52:17.0766509Z }, 2024-08-07T17:52:17.0766731Z { 2024-08-07T17:52:17.0767144Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0767666Z "size": 491, 2024-08-07T17:52:17.0768182Z "digest": "sha256:c3cbae3fe054ce8c713ed90c42a306ecc164d8256fd73a14ff7b0e088e150b3f" 2024-08-07T17:52:17.0768774Z }, 2024-08-07T17:52:17.0769001Z { 2024-08-07T17:52:17.0769402Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0769925Z "size": 296, 2024-08-07T17:52:17.0770448Z "digest": "sha256:ccc148c4e7590ced33e52f40edecd2d5ec73cb4a42c87dacaf5c5a7a3912c17b" 2024-08-07T17:52:17.0771041Z }, 2024-08-07T17:52:17.0771270Z { 2024-08-07T17:52:17.0771775Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0772314Z "size": 103, 
2024-08-07T17:52:17.0772838Z "digest": "sha256:7912f8c8e80ddc0dfc068c1282e6bd0ffd098b02458818c2c7b52a89c41d8335" 2024-08-07T17:52:17.0773467Z }, 2024-08-07T17:52:17.0773698Z { 2024-08-07T17:52:17.0774108Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0774628Z "size": 1473, 2024-08-07T17:52:17.0775131Z "digest": "sha256:d166ebb28213d6d30940b4fb9739863e9200174e7b550a1591e9028b1a039f83" 2024-08-07T17:52:17.0775704Z }, 2024-08-07T17:52:17.0775929Z { 2024-08-07T17:52:17.0776334Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0776856Z "size": 424146463, 2024-08-07T17:52:17.0777440Z "digest": "sha256:63bf315f789a755602aeb163e43e8173bc191c3dabc75e39ab31d0762bacc84f" 2024-08-07T17:52:17.0778008Z }, 2024-08-07T17:52:17.0778233Z { 2024-08-07T17:52:17.0778617Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0779119Z "size": 159, 2024-08-07T17:52:17.0779604Z "digest": "sha256:bdb818f7b2c8404f3e19777a27592349798986185a1f5b539309bbe8ea96e513" 2024-08-07T17:52:17.0780186Z }, 2024-08-07T17:52:17.0780416Z { 2024-08-07T17:52:17.0780821Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0781338Z "size": 566, 2024-08-07T17:52:17.0781852Z "digest": "sha256:89d8aea05b3a5e45fc1c48daf5ac32901006f7804ce5f2104112c2a2136acf28" 2024-08-07T17:52:17.0782438Z }, 2024-08-07T17:52:17.0782663Z { 2024-08-07T17:52:17.0783065Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0783585Z "size": 35874371, 2024-08-07T17:52:17.0784101Z "digest": "sha256:f1122e19f79064bde97285bf17ca6d8abb889972e5d95a463ffd2382145c1f22" 2024-08-07T17:52:17.0784677Z }, 2024-08-07T17:52:17.0784906Z { 2024-08-07T17:52:17.0785304Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0785827Z "size": 104, 2024-08-07T17:52:17.0786330Z "digest": "sha256:13d6ce3185e9912952041a572e2efa85b4544ec540f6050f750093d180a069f6" 2024-08-07T17:52:17.0787030Z }, 2024-08-07T17:52:17.0787258Z { 2024-08-07T17:52:17.0787665Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0788184Z "size": 425, 2024-08-07T17:52:17.0788697Z "digest": "sha256:feb3f80c392d4aef71730a9673030e955ce0e8a5c41f350eb7a00592d6b0dbb3" 2024-08-07T17:52:17.0789282Z }, 2024-08-07T17:52:17.0789507Z { 2024-08-07T17:52:17.0789913Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0790431Z "size": 20262075, 2024-08-07T17:52:17.0790958Z "digest": "sha256:4fe4cdcdfbd890964b8270a9140a5bf255709a21af4401b0428d91a735e8ac12" 2024-08-07T17:52:17.0791541Z }, 2024-08-07T17:52:17.0791766Z { 2024-08-07T17:52:17.0792173Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0792697Z "size": 440, 2024-08-07T17:52:17.0793211Z "digest": "sha256:be10b99d8ac8cfa04842a726627b1bdc764d3b6f1c591dca7933b86c93208c66" 2024-08-07T17:52:17.0793816Z }, 2024-08-07T17:52:17.0794045Z { 2024-08-07T17:52:17.0794448Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0794971Z "size": 700, 2024-08-07T17:52:17.0795943Z "digest": "sha256:679e209a81f89d0be588ce19c3f5191f73883a86e44ab7b3653a3be3f267b69e" 2024-08-07T17:52:17.0796496Z }, 2024-08-07T17:52:17.0796723Z { 2024-08-07T17:52:17.0797128Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0797646Z "size": 143, 2024-08-07T17:52:17.0798158Z "digest": 
"sha256:5980a36dfe02695abaecfad21a248c6c1902b07b2c9b69c61c39e342994e2f91" 2024-08-07T17:52:17.0798745Z }, 2024-08-07T17:52:17.0798968Z { 2024-08-07T17:52:17.0799375Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0799894Z "size": 135, 2024-08-07T17:52:17.0800571Z "digest": "sha256:94a4e0b3f19a399451a5f3cc7ddbde73ea16a7f180f7f047bf3ad868072c173f" 2024-08-07T17:52:17.0801193Z }, 2024-08-07T17:52:17.0801432Z { 2024-08-07T17:52:17.0801830Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0802352Z "size": 32, 2024-08-07T17:52:17.0802861Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2024-08-07T17:52:17.0803445Z }, 2024-08-07T17:52:17.0803701Z { 2024-08-07T17:52:17.0804140Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0804676Z "size": 189, 2024-08-07T17:52:17.0805207Z "digest": "sha256:2012c603f15449503b4671093a9ba6aff4fc99cf4923a92bb446fde7e52d59ee" 2024-08-07T17:52:17.0805817Z }, 2024-08-07T17:52:17.0806062Z { 2024-08-07T17:52:17.0806502Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0807036Z "size": 563, 2024-08-07T17:52:17.0807577Z "digest": "sha256:060890aa9610c5ec0050f85cafaa1f010ff178e2c8b0600aa3c43ad37ed48976" 2024-08-07T17:52:17.0808193Z }, 2024-08-07T17:52:17.0808445Z { 2024-08-07T17:52:17.0808888Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0809456Z "size": 43163116, 2024-08-07T17:52:17.0810002Z "digest": "sha256:c1a64eb8ee12a08340fb5c5a87dc012ff3074a8b683cc399feaa431de7402abd" 2024-08-07T17:52:17.0810626Z }, 2024-08-07T17:52:17.0810894Z { 2024-08-07T17:52:17.0811313Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0811860Z "size": 106, 2024-08-07T17:52:17.0812415Z "digest": "sha256:ed7686d06f1d744c9ec6dd0d75ae1581baefd7809deef8aefa11d54945c7888f" 2024-08-07T17:52:17.0813025Z }, 2024-08-07T17:52:17.0813287Z { 2024-08-07T17:52:17.0813725Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0814260Z "size": 1212, 2024-08-07T17:52:17.0814807Z "digest": "sha256:5c40be0141236773ddf2a3127f247bcc22540d4bebf4f3cc1df53f16f629ee35" 2024-08-07T17:52:17.0815424Z }, 2024-08-07T17:52:17.0815671Z { 2024-08-07T17:52:17.0816116Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0816852Z "size": 700, 2024-08-07T17:52:17.0817382Z "digest": "sha256:679e209a81f89d0be588ce19c3f5191f73883a86e44ab7b3653a3be3f267b69e" 2024-08-07T17:52:17.0817998Z }, 2024-08-07T17:52:17.0818263Z { 2024-08-07T17:52:17.0818689Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0819247Z "size": 138, 2024-08-07T17:52:17.0819838Z "digest": "sha256:95c1963010edc97c994c12e530dc7e5a5717123dfc4378fe8ecca9dbf79de394" 2024-08-07T17:52:17.0820426Z }, 2024-08-07T17:52:17.0820691Z { 2024-08-07T17:52:17.0821136Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0821673Z "size": 120, 2024-08-07T17:52:17.0822207Z "digest": "sha256:5805001913689846871dcb66b59a8d496e3c78fbf4b46c0c55cb11629af04779" 2024-08-07T17:52:17.0822812Z }, 2024-08-07T17:52:17.0823064Z { 2024-08-07T17:52:17.0823521Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0824086Z "size": 1916657670, 2024-08-07T17:52:17.0824648Z "digest": "sha256:b826637ebc384c2f2efbdc841bf6b8f0ac9b6a85060cab5d171d8ed8d49dd3de" 
2024-08-07T17:52:17.0825278Z }, 2024-08-07T17:52:17.0825526Z { 2024-08-07T17:52:17.0825973Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0826531Z "size": 173, 2024-08-07T17:52:17.0827040Z "digest": "sha256:859f9c7a63754c26422062903f2a9991578a62fe2f1a81c9ad0f0e9517ab7387" 2024-08-07T17:52:17.0827639Z }, 2024-08-07T17:52:17.0827902Z { 2024-08-07T17:52:17.0828320Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0828877Z "size": 908, 2024-08-07T17:52:17.0829420Z "digest": "sha256:b89ac1530c4a96d2c4c0626a5202eb9e9a05e0d08517e1a5bf165257505309e8" 2024-08-07T17:52:17.0830014Z }, 2024-08-07T17:52:17.0830282Z { 2024-08-07T17:52:17.0830723Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0831367Z "size": 700, 2024-08-07T17:52:17.0831937Z "digest": "sha256:679e209a81f89d0be588ce19c3f5191f73883a86e44ab7b3653a3be3f267b69e" 2024-08-07T17:52:17.0832549Z }, 2024-08-07T17:52:17.0832795Z { 2024-08-07T17:52:17.0833241Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0833829Z "size": 134, 2024-08-07T17:52:17.0834365Z "digest": "sha256:4f10deed2e003fe5f78af780a6b4a71d0107d3ee59e41de3f35c031ca08e9d4d" 2024-08-07T17:52:17.0834996Z }, 2024-08-07T17:52:17.0835261Z { 2024-08-07T17:52:17.0835680Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0836237Z "size": 32, 2024-08-07T17:52:17.0836795Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2024-08-07T17:52:17.0837402Z }, 2024-08-07T17:52:17.0837665Z { 2024-08-07T17:52:17.0838116Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0838657Z "size": 156, 2024-08-07T17:52:17.0839194Z "digest": "sha256:336420751f1de11d750328660fdb6ebb9051881d009d399e893c10d61ba69b0c" 2024-08-07T17:52:17.0839815Z }, 2024-08-07T17:52:17.0840063Z { 2024-08-07T17:52:17.0840503Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0841059Z "size": 1841, 2024-08-07T17:52:17.0841581Z "digest": "sha256:f7f49611427c9bdc74d97703f780519d1d7d2b95a5377f6f625c8884cbc21d4e" 2024-08-07T17:52:17.0842194Z }, 2024-08-07T17:52:17.0842457Z { 2024-08-07T17:52:17.0842878Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0843429Z "size": 7529783, 2024-08-07T17:52:17.0843962Z "digest": "sha256:628b460c253a663b1f76b99fd2f00d63872fce39b0830a3b45bdeec4f5244660" 2024-08-07T17:52:17.0844577Z }, 2024-08-07T17:52:17.0844838Z { 2024-08-07T17:52:17.0845258Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0845814Z "size": 164, 2024-08-07T17:52:17.0846357Z "digest": "sha256:98e88ff103238559de2c0c76e43d76a01b94584edee356532b7723d1fd39dd85" 2024-08-07T17:52:17.0847064Z }, 2024-08-07T17:52:17.0847325Z { 2024-08-07T17:52:17.0847768Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0848312Z "size": 7944, 2024-08-07T17:52:17.0848862Z "digest": "sha256:6abf825f7962d4bc769dde6a63a4132694ecb9ba0f17006085d8c339aeedf887" 2024-08-07T17:52:17.0849489Z }, 2024-08-07T17:52:17.0849735Z { 2024-08-07T17:52:17.0850176Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0850733Z "size": 8063, 2024-08-07T17:52:17.0851264Z "digest": "sha256:844414c41546bd3c4dd14a45bbd58cca4a2aa0e8f37a781f8c386736ae4d4081" 2024-08-07T17:52:17.0851882Z }, 2024-08-07T17:52:17.0852146Z { 2024-08-07T17:52:17.0852568Z "mediaType": 
"application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0853125Z "size": 300, 2024-08-07T17:52:17.0853671Z "digest": "sha256:b92a0d83e22950e600ffd0f6391f5c20b499107ba973cab4d7a54a5c65a922b1" 2024-08-07T17:52:17.0854279Z }, 2024-08-07T17:52:17.0854556Z { 2024-08-07T17:52:17.0854992Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0855528Z "size": 7629841, 2024-08-07T17:52:17.0856071Z "digest": "sha256:56e4340bc9e3886f7c099a66772a040a2d34cf0782746af58b0317af979cdfa3" 2024-08-07T17:52:17.0856679Z }, 2024-08-07T17:52:17.0856925Z { 2024-08-07T17:52:17.0857363Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0857920Z "size": 108, 2024-08-07T17:52:17.0858428Z "digest": "sha256:26f48d882588278c8763af295a6bc7147c492d82c6e4395970856a29fb8d77f0" 2024-08-07T17:52:17.0859028Z }, 2024-08-07T17:52:17.0859292Z { 2024-08-07T17:52:17.0859717Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0860269Z "size": 54145778, 2024-08-07T17:52:17.0860836Z "digest": "sha256:b6fe2821ba25ab984577df156aed9b873699ef0f46b6230d8e9a54f9ee22be1e" 2024-08-07T17:52:17.0861526Z }, 2024-08-07T17:52:17.0861805Z { 2024-08-07T17:52:17.0862243Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0862803Z "size": 473, 2024-08-07T17:52:17.0863354Z "digest": "sha256:fae8722cca7f32933f7a25f1491c31ea9a6df4fc1f9fb2360bd29c79b04f1c56" 2024-08-07T17:52:17.0863966Z }, 2024-08-07T17:52:17.0864236Z { 2024-08-07T17:52:17.0864681Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0865221Z "size": 1374858912, 2024-08-07T17:52:17.0865793Z "digest": "sha256:3c7c25c582fced622823798bd877a7fb903ebd4bfecd93c32e43dbd536bb8202" 2024-08-07T17:52:17.0866407Z }, 2024-08-07T17:52:17.0866660Z { 2024-08-07T17:52:17.0867101Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0867663Z "size": 106, 2024-08-07T17:52:17.0868193Z "digest": "sha256:75a49c2f3f0a99be9760740cce745e1ffd508a15bf5ef08077b2032b4d4d97ce" 2024-08-07T17:52:17.0868818Z }, 2024-08-07T17:52:17.0869096Z { 2024-08-07T17:52:17.0869523Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0870090Z "size": 558, 2024-08-07T17:52:17.0870663Z "digest": "sha256:b32c97699ecde27b65bfbbd8ba207755eb28584f2fc64501f4a320045ae969c8" 2024-08-07T17:52:17.0871269Z }, 2024-08-07T17:52:17.0871536Z { 2024-08-07T17:52:17.0871978Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0872523Z "size": 46248557, 2024-08-07T17:52:17.0873067Z "digest": "sha256:b926a85168171349a0ff57c87aa52b9174d3704512eb2e687184f5552883312a" 2024-08-07T17:52:17.0873668Z }, 2024-08-07T17:52:17.0873913Z { 2024-08-07T17:52:17.0874350Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0874905Z "size": 111, 2024-08-07T17:52:17.0875431Z "digest": "sha256:1c5d35b9a7607fd72af03dc281fe78215973e63c51c1b823a704727c8a0944eb" 2024-08-07T17:52:17.0876045Z }, 2024-08-07T17:52:17.0876312Z { 2024-08-07T17:52:17.0876747Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0877418Z "size": 32, 2024-08-07T17:52:17.0877965Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2024-08-07T17:52:17.0878574Z }, 2024-08-07T17:52:17.0878877Z { 2024-08-07T17:52:17.0879321Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0879864Z "size": 
32, 2024-08-07T17:52:17.0880415Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2024-08-07T17:52:17.0881040Z }, 2024-08-07T17:52:17.0881289Z { 2024-08-07T17:52:17.0881729Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0882268Z "size": 32, 2024-08-07T17:52:17.0882821Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2024-08-07T17:52:17.0883448Z }, 2024-08-07T17:52:17.0883693Z { 2024-08-07T17:52:17.0884139Z "mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip", 2024-08-07T17:52:17.0884704Z "size": 32, 2024-08-07T17:52:17.0885246Z "digest": "sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1" 2024-08-07T17:52:17.0885876Z } 2024-08-07T17:52:17.0886138Z ] 2024-08-07T17:52:17.0886379Z } 2024-08-07T17:52:17.0968783Z ##[group]Run tag=${ECR_DOCKER_IMAGE##*/} 2024-08-07T17:52:17.0969297Z tag=${ECR_DOCKER_IMAGE##*/} 2024-08-07T17:52:17.0969809Z echo "docker pull ghcr.io/pytorch/ci-image:${tag/:/-}" 2024-08-07T17:52:17.0976862Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:17.0977329Z env: 2024-08-07T17:52:17.0977621Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:17.0978569Z ECR_DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:17.0979550Z ##[endgroup] 2024-08-07T17:52:17.1011126Z docker pull ghcr.io/pytorch/ci-image:pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9-02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:17.1072148Z ##[group]Run pytorch/test-infra/.github/actions/pull-docker-image@main 2024-08-07T17:52:17.1072792Z with: 2024-08-07T17:52:17.1073736Z docker-image: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:17.1074906Z docker-registry: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:17.1075411Z env: 2024-08-07T17:52:17.1075707Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:17.1076080Z ##[endgroup] 2024-08-07T17:52:17.1108042Z ##[group]Run set -x 2024-08-07T17:52:17.1108432Z set -x 2024-08-07T17:52:17.1108737Z set +e 2024-08-07T17:52:17.1109056Z  2024-08-07T17:52:17.1109356Z login() { 2024-08-07T17:52:17.1109975Z  aws ecr get-login-password --region us-east-1 | docker login -u AWS --password-stdin "$1" 2024-08-07T17:52:17.1110669Z } 2024-08-07T17:52:17.1110968Z  2024-08-07T17:52:17.1111331Z retry () { 2024-08-07T17:52:17.1111723Z  $* || (sleep 1 && $*) || (sleep 2 && $*) 2024-08-07T17:52:17.1112162Z } 2024-08-07T17:52:17.1112455Z  2024-08-07T17:52:17.1112765Z retry login "${DOCKER_REGISTRY}" 2024-08-07T17:52:17.1113188Z  2024-08-07T17:52:17.1113481Z set -e 2024-08-07T17:52:17.1113934Z # ignore output since only exit code is used for conditional 2024-08-07T17:52:17.1114607Z # only pull docker image if it's not available locally 2024-08-07T17:52:17.1115347Z if ! 
docker inspect --type=image "${DOCKER_IMAGE}" >/dev/null 2>/dev/null; then 2024-08-07T17:52:17.1116014Z  retry docker pull "${DOCKER_IMAGE}" 2024-08-07T17:52:17.1116460Z fi 2024-08-07T17:52:17.1123142Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:52:17.1123643Z env: 2024-08-07T17:52:17.1123934Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:52:17.1124865Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:17.1126175Z DOCKER_REGISTRY: 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:17.1126670Z ##[endgroup] 2024-08-07T17:52:17.1153344Z + set +e 2024-08-07T17:52:17.1153749Z + retry login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:17.1154277Z + login 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:17.1157616Z + aws ecr get-login-password --region us-east-1 2024-08-07T17:52:17.1158890Z + docker login -u AWS --password-stdin 308535385114.dkr.ecr.us-east-1.amazonaws.com 2024-08-07T17:52:17.8157253Z WARNING! Your password will be stored unencrypted in /home/ec2-user/.docker/config.json. 2024-08-07T17:52:17.8158034Z Configure a credential helper to remove this warning. See 2024-08-07T17:52:17.8158767Z https://docs.docker.com/engine/reference/commandline/login/#credentials-store 2024-08-07T17:52:17.8159687Z 2024-08-07T17:52:17.8159875Z Login Succeeded 2024-08-07T17:52:17.8175340Z + set -e 2024-08-07T17:52:17.8176310Z + docker inspect --type=image 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:17.8351897Z + retry docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:17.8353472Z + docker pull 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:52:18.0793335Z 02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9: Pulling from pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9 2024-08-07T17:52:18.0795414Z 7a2c55901189: Pulling fs layer 2024-08-07T17:52:18.0796234Z 224fe954d725: Pulling fs layer 2024-08-07T17:52:18.0796983Z 75722010b82e: Pulling fs layer 2024-08-07T17:52:18.0798011Z d527cbbb87e3: Pulling fs layer 2024-08-07T17:52:18.0799230Z b57676e46aee: Pulling fs layer 2024-08-07T17:52:18.0800036Z a8c1e85b5e14: Pulling fs layer 2024-08-07T17:52:18.0800566Z a41a8d1c11c8: Pulling fs layer 2024-08-07T17:52:18.0801123Z 0c1227890755: Pulling fs layer 2024-08-07T17:52:18.0801821Z d8d1234baab3: Pulling fs layer 2024-08-07T17:52:18.0802533Z 7ed32bc8e469: Pulling fs layer 2024-08-07T17:52:18.0803246Z ec1e7978c1fe: Pulling fs layer 2024-08-07T17:52:18.0804409Z 66b43372aa39: Pulling fs layer 2024-08-07T17:52:18.0804871Z a8c1e85b5e14: Waiting 2024-08-07T17:52:18.0805486Z b6662193c745: Pulling fs layer 2024-08-07T17:52:18.0805967Z 5be2b638d110: Pulling fs layer 2024-08-07T17:52:18.0806634Z d527cbbb87e3: Waiting 2024-08-07T17:52:18.0807308Z a41a8d1c11c8: Waiting 2024-08-07T17:52:18.0807847Z 71ca63790839: Pulling fs layer 2024-08-07T17:52:18.0808202Z 8a74804dc4fa: Pulling fs layer 2024-08-07T17:52:18.0808578Z 3bacb5389b74: Pulling fs layer 2024-08-07T17:52:18.0808950Z a8911a72541a: Pulling fs layer 2024-08-07T17:52:18.0809310Z 55d020986bb7: Pulling fs layer 2024-08-07T17:52:18.0809673Z 66b43372aa39: Waiting 
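The step above logs in to ECR and pulls the image only when "docker inspect --type=image" cannot find it in the local cache, wrapping both in a small retry helper. A minimal standalone sketch of that same pattern, assuming the AWS CLI and Docker are installed and with placeholder registry and image names:

    #!/usr/bin/env bash
    set -euo pipefail

    REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"   # placeholder
    IMAGE="${REGISTRY}/example/image:tag"                     # placeholder

    # Retry with a short backoff. Note "$@" rather than the unquoted $* the
    # workflow uses: "$@" preserves arguments that contain spaces.
    retry() {
      "$@" || (sleep 1 && "$@") || (sleep 2 && "$@")
    }

    login() {
      # ECR passwords are short-lived tokens; mint one and pipe it to docker login.
      aws ecr get-login-password --region us-east-1 |
        docker login -u AWS --password-stdin "$1"
    }

    retry login "${REGISTRY}"

    # Only pull when the image is missing from the local cache.
    if ! docker inspect --type=image "${IMAGE}" >/dev/null 2>&1; then
      retry docker pull "${IMAGE}"
    fi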
2024-08-07T17:52:18.0810006Z 679e209a81f8: Pulling fs layer 2024-08-07T17:52:18.0810342Z d8d1234baab3: Waiting 2024-08-07T17:52:18.0810729Z d4fb7093f54f: Pulling fs layer 2024-08-07T17:52:18.0811090Z 8a74804dc4fa: Waiting 2024-08-07T17:52:18.0811402Z 0d8ab4023e81: Pulling fs layer 2024-08-07T17:52:18.0811787Z b6662193c745: Waiting 2024-08-07T17:52:18.0812573Z a8911a72541a: Waiting 2024-08-07T17:52:18.0812922Z 679e209a81f8: Waiting 2024-08-07T17:52:18.0813235Z 71ca63790839: Waiting 2024-08-07T17:52:18.0813547Z 3bacb5389b74: Waiting 2024-08-07T17:52:18.0813864Z bf191f5f5a0a: Pulling fs layer 2024-08-07T17:52:18.0814230Z 14653e4e245f: Pulling fs layer 2024-08-07T17:52:18.0814579Z 55d020986bb7: Waiting 2024-08-07T17:52:18.0814886Z 8bdbb000c39d: Pulling fs layer 2024-08-07T17:52:18.0815251Z 277383b63c07: Pulling fs layer 2024-08-07T17:52:18.0815592Z 890313244493: Pulling fs layer 2024-08-07T17:52:18.0815960Z f1e3cc0f57ee: Pulling fs layer 2024-08-07T17:52:18.0816328Z 5be2b638d110: Waiting 2024-08-07T17:52:18.0816901Z d4fb7093f54f: Waiting 2024-08-07T17:52:18.0817231Z c3cbae3fe054: Pulling fs layer 2024-08-07T17:52:18.0817600Z ccc148c4e759: Pulling fs layer 2024-08-07T17:52:18.0817935Z 277383b63c07: Waiting 2024-08-07T17:52:18.0818243Z 0c1227890755: Waiting 2024-08-07T17:52:18.0818567Z 7912f8c8e80d: Pulling fs layer 2024-08-07T17:52:18.0818914Z 8bdbb000c39d: Waiting 2024-08-07T17:52:18.0819240Z d166ebb28213: Pulling fs layer 2024-08-07T17:52:18.0819625Z f1e3cc0f57ee: Waiting 2024-08-07T17:52:18.0820278Z c3cbae3fe054: Waiting 2024-08-07T17:52:18.0820858Z bf191f5f5a0a: Waiting 2024-08-07T17:52:18.0821463Z 63bf315f789a: Pulling fs layer 2024-08-07T17:52:18.0822095Z ccc148c4e759: Waiting 2024-08-07T17:52:18.0822690Z bdb818f7b2c8: Pulling fs layer 2024-08-07T17:52:18.0823696Z 89d8aea05b3a: Pulling fs layer 2024-08-07T17:52:18.0824373Z f1122e19f790: Pulling fs layer 2024-08-07T17:52:18.0825432Z 63bf315f789a: Waiting 2024-08-07T17:52:18.0826014Z bdb818f7b2c8: Waiting 2024-08-07T17:52:18.0826604Z 13d6ce3185e9: Pulling fs layer 2024-08-07T17:52:18.0827190Z feb3f80c392d: Pulling fs layer 2024-08-07T17:52:18.0827741Z 4fe4cdcdfbd8: Pulling fs layer 2024-08-07T17:52:18.0828382Z be10b99d8ac8: Pulling fs layer 2024-08-07T17:52:18.0829054Z 7ed32bc8e469: Waiting 2024-08-07T17:52:18.0829634Z 13d6ce3185e9: Waiting 2024-08-07T17:52:18.0830215Z f1122e19f790: Waiting 2024-08-07T17:52:18.0830837Z 5980a36dfe02: Pulling fs layer 2024-08-07T17:52:18.0831432Z be10b99d8ac8: Waiting 2024-08-07T17:52:18.0832434Z 4fe4cdcdfbd8: Waiting 2024-08-07T17:52:18.0833019Z 94a4e0b3f19a: Pulling fs layer 2024-08-07T17:52:18.0833615Z 4f4fb700ef54: Pulling fs layer 2024-08-07T17:52:18.0834207Z 2012c603f154: Pulling fs layer 2024-08-07T17:52:18.0835055Z 060890aa9610: Pulling fs layer 2024-08-07T17:52:18.0835635Z c1a64eb8ee12: Pulling fs layer 2024-08-07T17:52:18.0836480Z ed7686d06f1d: Pulling fs layer 2024-08-07T17:52:18.0837065Z 4f4fb700ef54: Waiting 2024-08-07T17:52:18.0837575Z 2012c603f154: Waiting 2024-08-07T17:52:18.0838601Z 5c40be014123: Pulling fs layer 2024-08-07T17:52:18.0839411Z 94a4e0b3f19a: Waiting 2024-08-07T17:52:18.0839872Z 7912f8c8e80d: Waiting 2024-08-07T17:52:18.0840388Z 95c1963010ed: Pulling fs layer 2024-08-07T17:52:18.0840991Z 5c40be014123: Waiting 2024-08-07T17:52:18.0841455Z 580500191368: Pulling fs layer 2024-08-07T17:52:18.0842004Z feb3f80c392d: Waiting 2024-08-07T17:52:18.0842304Z c1a64eb8ee12: Waiting 2024-08-07T17:52:18.0842631Z b826637ebc38: Pulling fs layer 2024-08-07T17:52:18.0842980Z 580500191368: Waiting 
2024-08-07T17:52:18.0843283Z 859f9c7a6375: Pulling fs layer 2024-08-07T17:52:18.0843649Z b89ac1530c4a: Pulling fs layer 2024-08-07T17:52:18.0844119Z 890313244493: Waiting 2024-08-07T17:52:18.0844473Z 859f9c7a6375: Waiting 2024-08-07T17:52:18.0844802Z 4f10deed2e00: Pulling fs layer 2024-08-07T17:52:18.0845166Z 336420751f1d: Pulling fs layer 2024-08-07T17:52:18.0845871Z f7f49611427c: Pulling fs layer 2024-08-07T17:52:18.0846236Z 336420751f1d: Waiting 2024-08-07T17:52:18.0846683Z 628b460c253a: Pulling fs layer 2024-08-07T17:52:18.0847056Z 98e88ff10323: Pulling fs layer 2024-08-07T17:52:18.0847414Z d166ebb28213: Waiting 2024-08-07T17:52:18.0847732Z 628b460c253a: Waiting 2024-08-07T17:52:18.0848040Z 6abf825f7962: Pulling fs layer 2024-08-07T17:52:18.0848397Z 844414c41546: Pulling fs layer 2024-08-07T17:52:18.0848761Z b92a0d83e229: Pulling fs layer 2024-08-07T17:52:18.0849095Z 6abf825f7962: Waiting 2024-08-07T17:52:18.0849411Z 98e88ff10323: Waiting 2024-08-07T17:52:18.0849726Z 56e4340bc9e3: Pulling fs layer 2024-08-07T17:52:18.0850095Z 26f48d882588: Pulling fs layer 2024-08-07T17:52:18.0850441Z 844414c41546: Waiting 2024-08-07T17:52:18.0850753Z b6fe2821ba25: Pulling fs layer 2024-08-07T17:52:18.0851111Z b826637ebc38: Waiting 2024-08-07T17:52:18.0851428Z 56e4340bc9e3: Waiting 2024-08-07T17:52:18.0851747Z fae8722cca7f: Pulling fs layer 2024-08-07T17:52:18.0852110Z ed7686d06f1d: Waiting 2024-08-07T17:52:18.0852444Z 3c7c25c582fc: Pulling fs layer 2024-08-07T17:52:18.0852806Z 75a49c2f3f0a: Pulling fs layer 2024-08-07T17:52:18.0853342Z b32c97699ecd: Pulling fs layer 2024-08-07T17:52:18.0853727Z b926a8516817: Pulling fs layer 2024-08-07T17:52:18.0854078Z 1c5d35b9a760: Pulling fs layer 2024-08-07T17:52:18.0854432Z fae8722cca7f: Waiting 2024-08-07T17:52:18.0854749Z b926a8516817: Waiting 2024-08-07T17:52:18.0855049Z 1c5d35b9a760: Waiting 2024-08-07T17:52:18.0855367Z 3c7c25c582fc: Waiting 2024-08-07T17:52:18.3969424Z 224fe954d725: Verifying Checksum 2024-08-07T17:52:18.3969886Z 224fe954d725: Download complete 2024-08-07T17:52:18.4245663Z 7a2c55901189: Verifying Checksum 2024-08-07T17:52:18.4246145Z 7a2c55901189: Download complete 2024-08-07T17:52:18.4605090Z d527cbbb87e3: Verifying Checksum 2024-08-07T17:52:18.4605591Z d527cbbb87e3: Download complete 2024-08-07T17:52:18.5262064Z b57676e46aee: Verifying Checksum 2024-08-07T17:52:18.5262558Z b57676e46aee: Download complete 2024-08-07T17:52:18.6252017Z a41a8d1c11c8: Verifying Checksum 2024-08-07T17:52:18.6252637Z a41a8d1c11c8: Download complete 2024-08-07T17:52:18.6958990Z 75722010b82e: Verifying Checksum 2024-08-07T17:52:18.6959512Z 75722010b82e: Download complete 2024-08-07T17:52:18.7015919Z 0c1227890755: Download complete 2024-08-07T17:52:18.7740124Z d8d1234baab3: Verifying Checksum 2024-08-07T17:52:18.7740599Z d8d1234baab3: Download complete 2024-08-07T17:52:18.8569694Z ec1e7978c1fe: Verifying Checksum 2024-08-07T17:52:18.8570531Z ec1e7978c1fe: Download complete 2024-08-07T17:52:18.9670789Z 66b43372aa39: Download complete 2024-08-07T17:52:19.8242938Z 7a2c55901189: Pull complete 2024-08-07T17:52:20.1938681Z 224fe954d725: Pull complete 2024-08-07T17:52:21.3852097Z 75722010b82e: Pull complete 2024-08-07T17:52:21.4041814Z d527cbbb87e3: Pull complete 2024-08-07T17:52:21.4256307Z b57676e46aee: Pull complete 2024-08-07T17:52:21.4974485Z b6662193c745: Verifying Checksum 2024-08-07T17:52:21.4975321Z b6662193c745: Download complete 2024-08-07T17:52:21.5788630Z 5be2b638d110: Download complete 2024-08-07T17:52:21.6659996Z 71ca63790839: Verifying Checksum 
2024-08-07T17:52:21.6661531Z 71ca63790839: Download complete 2024-08-07T17:52:21.7392058Z 8a74804dc4fa: Verifying Checksum 2024-08-07T17:52:21.7392561Z 8a74804dc4fa: Download complete 2024-08-07T17:52:22.7098070Z 3bacb5389b74: Verifying Checksum 2024-08-07T17:52:22.7098559Z 3bacb5389b74: Download complete 2024-08-07T17:52:22.7797301Z a8911a72541a: Download complete 2024-08-07T17:52:22.8593256Z 55d020986bb7: Verifying Checksum 2024-08-07T17:52:22.8593719Z 55d020986bb7: Download complete 2024-08-07T17:52:22.9700456Z 679e209a81f8: Verifying Checksum 2024-08-07T17:52:22.9700930Z 679e209a81f8: Download complete 2024-08-07T17:52:31.8209854Z a8c1e85b5e14: Verifying Checksum 2024-08-07T17:52:31.8210593Z a8c1e85b5e14: Download complete 2024-08-07T17:52:31.9094626Z 0d8ab4023e81: Verifying Checksum 2024-08-07T17:52:31.9095786Z 0d8ab4023e81: Download complete 2024-08-07T17:52:32.0085516Z bf191f5f5a0a: Download complete 2024-08-07T17:52:32.0852829Z 14653e4e245f: Verifying Checksum 2024-08-07T17:52:32.0853577Z 14653e4e245f: Download complete 2024-08-07T17:52:32.1603767Z 8bdbb000c39d: Verifying Checksum 2024-08-07T17:52:32.1604613Z 8bdbb000c39d: Download complete 2024-08-07T17:52:32.2383805Z 277383b63c07: Verifying Checksum 2024-08-07T17:52:32.2384338Z 277383b63c07: Download complete 2024-08-07T17:52:33.5618935Z 890313244493: Verifying Checksum 2024-08-07T17:52:33.5619401Z 890313244493: Download complete 2024-08-07T17:52:33.6450924Z f1e3cc0f57ee: Verifying Checksum 2024-08-07T17:52:33.6451690Z f1e3cc0f57ee: Download complete 2024-08-07T17:52:33.7351161Z c3cbae3fe054: Download complete 2024-08-07T17:52:33.8059467Z ccc148c4e759: Verifying Checksum 2024-08-07T17:52:33.8059925Z ccc148c4e759: Download complete 2024-08-07T17:52:33.8760874Z 7912f8c8e80d: Verifying Checksum 2024-08-07T17:52:33.9622086Z d166ebb28213: Verifying Checksum 2024-08-07T17:52:33.9623036Z d166ebb28213: Download complete 2024-08-07T17:52:38.2635206Z 63bf315f789a: Verifying Checksum 2024-08-07T17:52:38.2635694Z 63bf315f789a: Download complete 2024-08-07T17:52:38.3259002Z bdb818f7b2c8: Download complete 2024-08-07T17:52:38.3887002Z 89d8aea05b3a: Download complete 2024-08-07T17:52:38.8037510Z f1122e19f790: Verifying Checksum 2024-08-07T17:52:38.8038024Z f1122e19f790: Download complete 2024-08-07T17:52:38.8894256Z 13d6ce3185e9: Download complete 2024-08-07T17:52:38.9829221Z feb3f80c392d: Verifying Checksum 2024-08-07T17:52:38.9830048Z feb3f80c392d: Download complete 2024-08-07T17:52:39.2312152Z 4fe4cdcdfbd8: Verifying Checksum 2024-08-07T17:52:39.2312666Z 4fe4cdcdfbd8: Download complete 2024-08-07T17:52:39.3172677Z be10b99d8ac8: Verifying Checksum 2024-08-07T17:52:39.3173376Z be10b99d8ac8: Download complete 2024-08-07T17:52:39.4122672Z 5980a36dfe02: Download complete 2024-08-07T17:52:39.4936760Z 94a4e0b3f19a: Download complete 2024-08-07T17:52:39.5031757Z 4f4fb700ef54: Verifying Checksum 2024-08-07T17:52:39.5032327Z 4f4fb700ef54: Download complete 2024-08-07T17:52:39.5830720Z 2012c603f154: Verifying Checksum 2024-08-07T17:52:39.5831182Z 2012c603f154: Download complete 2024-08-07T17:52:39.6787753Z 060890aa9610: Verifying Checksum 2024-08-07T17:52:39.6788251Z 060890aa9610: Download complete 2024-08-07T17:52:40.1635653Z c1a64eb8ee12: Verifying Checksum 2024-08-07T17:52:40.1636497Z c1a64eb8ee12: Download complete 2024-08-07T17:52:40.2335447Z ed7686d06f1d: Verifying Checksum 2024-08-07T17:52:40.2335926Z ed7686d06f1d: Download complete 2024-08-07T17:52:40.3225474Z 5c40be014123: Verifying Checksum 2024-08-07T17:52:40.3226022Z 5c40be014123: Download 
complete 2024-08-07T17:52:40.4184790Z 95c1963010ed: Verifying Checksum 2024-08-07T17:52:40.4185503Z 95c1963010ed: Download complete 2024-08-07T17:52:40.4998386Z 580500191368: Verifying Checksum 2024-08-07T17:52:40.4998867Z 580500191368: Download complete 2024-08-07T17:52:44.0399125Z 7ed32bc8e469: Verifying Checksum 2024-08-07T17:52:44.0399603Z 7ed32bc8e469: Download complete 2024-08-07T17:52:44.1339387Z 859f9c7a6375: Download complete 2024-08-07T17:52:44.2128246Z b89ac1530c4a: Verifying Checksum 2024-08-07T17:52:44.2129042Z b89ac1530c4a: Download complete 2024-08-07T17:52:44.2943117Z 4f10deed2e00: Verifying Checksum 2024-08-07T17:52:44.2943976Z 4f10deed2e00: Download complete 2024-08-07T17:52:44.3791986Z 336420751f1d: Verifying Checksum 2024-08-07T17:52:44.3792711Z 336420751f1d: Download complete 2024-08-07T17:52:44.4563216Z f7f49611427c: Download complete 2024-08-07T17:52:44.5955538Z 628b460c253a: Verifying Checksum 2024-08-07T17:52:44.5956385Z 628b460c253a: Download complete 2024-08-07T17:52:44.6890009Z 98e88ff10323: Verifying Checksum 2024-08-07T17:52:44.6892762Z 98e88ff10323: Download complete 2024-08-07T17:52:44.7703634Z 6abf825f7962: Verifying Checksum 2024-08-07T17:52:44.7704418Z 6abf825f7962: Download complete 2024-08-07T17:52:44.8801193Z 844414c41546: Verifying Checksum 2024-08-07T17:52:44.8801887Z 844414c41546: Download complete 2024-08-07T17:52:44.9506567Z b92a0d83e229: Verifying Checksum 2024-08-07T17:52:44.9507368Z b92a0d83e229: Download complete 2024-08-07T17:52:45.1004690Z 56e4340bc9e3: Verifying Checksum 2024-08-07T17:52:45.1005592Z 56e4340bc9e3: Download complete 2024-08-07T17:52:45.1860039Z 26f48d882588: Verifying Checksum 2024-08-07T17:52:45.1860883Z 26f48d882588: Download complete 2024-08-07T17:52:45.7937966Z b6fe2821ba25: Verifying Checksum 2024-08-07T17:52:45.7938467Z b6fe2821ba25: Download complete 2024-08-07T17:52:45.8859079Z fae8722cca7f: Verifying Checksum 2024-08-07T17:52:48.8502134Z a8c1e85b5e14: Pull complete 2024-08-07T17:52:49.2980535Z a41a8d1c11c8: Pull complete 2024-08-07T17:52:49.7462549Z 0c1227890755: Pull complete 2024-08-07T17:52:50.1957930Z d8d1234baab3: Pull complete 2024-08-07T17:52:50.8707071Z d4fb7093f54f: Verifying Checksum 2024-08-07T17:52:50.8707889Z d4fb7093f54f: Download complete 2024-08-07T17:52:50.9719643Z 75a49c2f3f0a: Verifying Checksum 2024-08-07T17:52:50.9720285Z 75a49c2f3f0a: Download complete 2024-08-07T17:52:51.0440454Z b32c97699ecd: Verifying Checksum 2024-08-07T17:52:51.0441253Z b32c97699ecd: Download complete 2024-08-07T17:52:51.5561677Z b926a8516817: Verifying Checksum 2024-08-07T17:52:51.5562198Z b926a8516817: Download complete 2024-08-07T17:52:51.6278391Z 1c5d35b9a760: Verifying Checksum 2024-08-07T17:52:51.6278953Z 1c5d35b9a760: Download complete 2024-08-07T17:52:59.7201519Z b826637ebc38: Verifying Checksum 2024-08-07T17:52:59.7202317Z b826637ebc38: Download complete 2024-08-07T17:52:59.7322118Z 3c7c25c582fc: Verifying Checksum 2024-08-07T17:52:59.7322823Z 3c7c25c582fc: Download complete 2024-08-07T17:53:26.9382570Z 7ed32bc8e469: Pull complete 2024-08-07T17:53:27.4016845Z ec1e7978c1fe: Pull complete 2024-08-07T17:53:27.8573380Z 66b43372aa39: Pull complete 2024-08-07T17:53:38.3968126Z b6662193c745: Pull complete 2024-08-07T17:53:38.8170829Z 5be2b638d110: Pull complete 2024-08-07T17:53:39.2859526Z 71ca63790839: Pull complete 2024-08-07T17:53:39.7454915Z 8a74804dc4fa: Pull complete 2024-08-07T17:53:43.1612570Z 3bacb5389b74: Pull complete 2024-08-07T17:53:43.6464659Z a8911a72541a: Pull complete 2024-08-07T17:53:44.1097534Z 
55d020986bb7: Pull complete 2024-08-07T17:53:44.5555472Z 679e209a81f8: Pull complete 2024-08-07T17:54:49.2243951Z d4fb7093f54f: Pull complete 2024-08-07T17:54:49.4965633Z 0d8ab4023e81: Pull complete 2024-08-07T17:54:49.7219415Z bf191f5f5a0a: Pull complete 2024-08-07T17:54:49.9807167Z 14653e4e245f: Pull complete 2024-08-07T17:54:50.2923464Z 8bdbb000c39d: Pull complete 2024-08-07T17:54:50.7313065Z 277383b63c07: Pull complete 2024-08-07T17:54:55.1020652Z 890313244493: Pull complete 2024-08-07T17:54:55.4603110Z f1e3cc0f57ee: Pull complete 2024-08-07T17:54:55.8515030Z c3cbae3fe054: Pull complete 2024-08-07T17:54:56.2068063Z ccc148c4e759: Pull complete 2024-08-07T17:54:56.4534461Z 7912f8c8e80d: Pull complete 2024-08-07T17:54:56.7624802Z d166ebb28213: Pull complete 2024-08-07T17:55:07.9387597Z 63bf315f789a: Pull complete 2024-08-07T17:55:08.2546918Z bdb818f7b2c8: Pull complete 2024-08-07T17:55:08.5608642Z 89d8aea05b3a: Pull complete 2024-08-07T17:55:09.9170132Z f1122e19f790: Pull complete 2024-08-07T17:55:10.2666873Z 13d6ce3185e9: Pull complete 2024-08-07T17:55:10.7233332Z feb3f80c392d: Pull complete 2024-08-07T17:55:11.5307840Z 4fe4cdcdfbd8: Pull complete 2024-08-07T17:55:11.9825060Z be10b99d8ac8: Pull complete 2024-08-07T17:55:12.8362137Z 5980a36dfe02: Pull complete 2024-08-07T17:55:13.2525784Z 94a4e0b3f19a: Pull complete 2024-08-07T17:55:13.6539671Z 4f4fb700ef54: Pull complete 2024-08-07T17:55:14.0522104Z 2012c603f154: Pull complete 2024-08-07T17:55:14.4382348Z 060890aa9610: Pull complete 2024-08-07T17:55:17.5585925Z c1a64eb8ee12: Pull complete 2024-08-07T17:55:17.9024581Z ed7686d06f1d: Pull complete 2024-08-07T17:55:18.3430315Z 5c40be014123: Pull complete 2024-08-07T17:55:19.0058989Z 95c1963010ed: Pull complete 2024-08-07T17:55:19.4496977Z 580500191368: Pull complete 2024-08-07T17:56:02.8668689Z b826637ebc38: Pull complete 2024-08-07T17:56:03.3251950Z 859f9c7a6375: Pull complete 2024-08-07T17:56:03.7792227Z b89ac1530c4a: Pull complete 2024-08-07T17:56:04.6979120Z 4f10deed2e00: Pull complete 2024-08-07T17:56:05.4176853Z 336420751f1d: Pull complete 2024-08-07T17:56:05.8724865Z f7f49611427c: Pull complete 2024-08-07T17:56:06.5077364Z 628b460c253a: Pull complete 2024-08-07T17:56:06.9556677Z 98e88ff10323: Pull complete 2024-08-07T17:56:07.4060053Z 6abf825f7962: Pull complete 2024-08-07T17:56:07.8520545Z 844414c41546: Pull complete 2024-08-07T17:56:08.2566192Z b92a0d83e229: Pull complete 2024-08-07T17:56:10.1047246Z 56e4340bc9e3: Pull complete 2024-08-07T17:56:10.5591438Z 26f48d882588: Pull complete 2024-08-07T17:56:13.4278665Z b6fe2821ba25: Pull complete 2024-08-07T17:56:13.8724158Z fae8722cca7f: Pull complete 2024-08-07T17:56:30.0011614Z 3c7c25c582fc: Pull complete 2024-08-07T17:56:30.4504159Z 75a49c2f3f0a: Pull complete 2024-08-07T17:56:30.9018544Z b32c97699ecd: Pull complete 2024-08-07T17:56:32.1851834Z b926a8516817: Pull complete 2024-08-07T17:56:32.2057248Z 1c5d35b9a760: Pull complete 2024-08-07T17:56:32.2980723Z Digest: sha256:00f47b036f588ca5ef8866f8635fabba5a95cdf9ff1adae7d2a674ef1d4076e9 2024-08-07T17:56:32.3013226Z Status: Downloaded newer image for 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:56:32.3048065Z 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:56:32.3103109Z ##[group]Run echo "IN_ARC_RUNNER=$([ -f /.inarc ] && echo true || echo false)" >> "$GITHUB_OUTPUT" 
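The group opened above tells later steps whether this job runs on an ARC (Actions Runner Controller) runner. The probe is simply the presence of a /.inarc marker file, presumably baked into those runner images, and the "test && echo true || echo false" idiom collapses the file test into a literal true/false written to GITHUB_OUTPUT. The same check in isolation:

    # Emit true when the ARC marker file exists, false otherwise.
    IN_ARC=$([ -f /.inarc ] && echo true || echo false)
    echo "IN_ARC_RUNNER=${IN_ARC}"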
2024-08-07T17:56:32.3104046Z echo "IN_ARC_RUNNER=$([ -f /.inarc ] && echo true || echo false)" >> "$GITHUB_OUTPUT" 2024-08-07T17:56:32.3114075Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:56:32.3114545Z env: 2024-08-07T17:56:32.3114841Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:56:32.3115188Z ##[endgroup] 2024-08-07T17:56:32.3418071Z ##[group]Run pytorch/test-infra/.github/actions/setup-nvidia@main 2024-08-07T17:56:32.3418646Z with: 2024-08-07T17:56:32.3418958Z driver-version: 550.54.15 2024-08-07T17:56:32.3419316Z env: 2024-08-07T17:56:32.3419587Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:56:32.3419963Z ##[endgroup] 2024-08-07T17:56:32.3476085Z ##[group]Run nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482 2024-08-07T17:56:32.3476644Z with: 2024-08-07T17:56:32.3476940Z timeout_minutes: 10 2024-08-07T17:56:32.3477265Z max_attempts: 3 2024-08-07T17:56:32.3508403Z command: # Is it disgusting to have a full shell script here in this github action? Sure # But is it the best way to make it so that this action relies on nothing else? Absolutely set -eou pipefail DISTRIBUTION=$(. /etc/os-release;echo $ID$VERSION_ID) DRIVER_FN="NVIDIA-Linux-x86_64-${DRIVER_VERSION}.run" install_nvidia_docker2_amzn2() { ( set -x # Needed for yum-config-manager sudo yum install -y yum-utils if [[ "${DISTRIBUTION}" == "amzn2023" ]] ; then YUM_REPO_URL="https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo" else # Amazon Linux 2 YUM_REPO_URL="https://nvidia.github.io/nvidia-docker/${DISTRIBUTION}/nvidia-docker.repo" fi sudo yum-config-manager --add-repo "${YUM_REPO_URL}" sudo yum install -y nvidia-docker2 sudo systemctl restart docker ) } install_nvidia_docker2_ubuntu20() { ( set -x # Install nvidia-driver package if not installed status="$(dpkg-query -W --showformat='${db:Status-Status}' nvidia-docker2 2>&1)" if [ ! $? = 0 ] || [ ! "$status" = installed ]; then sudo apt-get install -y nvidia-docker2 sudo systemctl restart docker fi ) } pre_install_nvidia_driver_amzn2() { ( # Purge any nvidia driver installed from RHEL repo sudo yum remove -y nvidia-driver-latest-dkms ) } install_nvidia_driver_common() { ( # Try to gather more information about the runner and its existing NVIDIA driver if any echo "Before installing NVIDIA driver" lspci lsmod modinfo nvidia || true HAS_NVIDIA_DRIVER=0 # Check if NVIDIA driver has already been installed if [ -x "$(command -v nvidia-smi)" ]; then set +e # The driver exists, check its version next. Also check only the first GPU if there are more than one of them # so that the same driver version is not print over multiple lines INSTALLED_DRIVER_VERSION=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader --id=0) NVIDIA_SMI_STATUS=$? if [ "$NVIDIA_SMI_STATUS" -ne 0 ] && [ "$NVIDIA_SMI_STATUS" -ne 14 ]; then echo "Failed to get NVIDIA driver version ($INSTALLED_DRIVER_VERSION). Continuing" elif [ "$INSTALLED_DRIVER_VERSION" != "$DRIVER_VERSION" ]; then echo "NVIDIA driver ($INSTALLED_DRIVER_VERSION) has been installed, but we expect to have $DRIVER_VERSION instead. Continuing" else HAS_NVIDIA_DRIVER=1 echo "NVIDIA driver ($INSTALLED_DRIVER_VERSION) has already been installed. 
Skipping NVIDIA driver installation" fi set -e fi if [ "$HAS_NVIDIA_DRIVER" -eq 0 ]; then # CAUTION: this may need to be updated in future if [ "${DISTRIBUTION}" != ubuntu20.04 ]; then sudo yum groupinstall -y "Development Tools" # ensure our kernel install is the same as our underlying kernel, # groupinstall "Development Tools" has a habit of mismatching kernel headers sudo yum install -y "kernel-devel-uname-r == $(uname -r)" sudo modprobe backlight fi sudo curl -fsL -o /tmp/nvidia_driver "https://s3.amazonaws.com/ossci-linux/nvidia_driver/$DRIVER_FN" set +e sudo /bin/bash /tmp/nvidia_driver -s --no-drm NVIDIA_INSTALLATION_STATUS=$? RESET_GPU=0 if [ "$NVIDIA_INSTALLATION_STATUS" -ne 0 ]; then sudo cat /var/log/nvidia-installer.log # Failed to install NVIDIA driver, try to reset the GPU RESET_GPU=1 elif [ -x "$(command -v nvidia-smi)" ]; then # Check again if nvidia-smi works even if the driver installation completes successfully INSTALLED_DRIVER_VERSION=$(nvidia-smi --query-gpu=driver_version --format=csv,noheader --id=0) NVIDIA_SMI_STATUS=$? if [ "$NVIDIA_SMI_STATUS" -ne 0 ] && [ "$NVIDIA_SMI_STATUS" -ne 14 ]; then RESET_GPU=1 fi fi if [ "$RESET_GPU" -eq 1 ]; then NVIDIA_DEVICES=$(lspci -D | grep -i NVIDIA | cut -d' ' -f1) # The GPU can get stuck in a failure state if somehow the test crashes the GPU microcode. When this # happens, we'll try to reset all NVIDIA devices https://github.com/pytorch/pytorch/issues/88388 for PCI_ID in $NVIDIA_DEVICES; do DEVICE_ENABLED=$(cat /sys/bus/pci/devices/$PCI_ID/enable) echo "Resetting $PCI_ID (enabled state: $DEVICE_ENABLED)" # This requires sudo permission of course echo "1" | sudo tee /sys/bus/pci/devices/$PCI_ID/reset sleep 1 done fi sudo rm -fv /tmp/nvidia_driver set -e fi ) } post_install_nvidia_driver_common() { ( sudo modprobe nvidia || true echo "After installing NVIDIA driver" lspci lsmod modinfo nvidia || true ( set +e nvidia-smi # NB: Annoyingly, nvidia-smi command returns successfully with return code 0 even in # the case where the driver has already crashed as it still can get the driver version # and some basic information like the bus ID. However, the rest of the information # would be missing (ERR!), for example: # # +-----------------------------------------------------------------------------+ # | NVIDIA-SMI 525.89.02 Driver Version: 525.89.02 CUDA Version: 12.0 | # |-------------------------------+----------------------+----------------------+ # | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | # | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | # | | | MIG M. | # |===============================+======================+======================| # | 0 ERR! Off | 00000000:00:1E.0 Off | ERR! | # |ERR! ERR! ERR! ERR! / ERR! | 4184MiB / 23028MiB | ERR! Default | # | | | ERR!
2024-08-07T17:56:32.4549708Z == Installing nvidia driver NVIDIA-Linux-x86_64-550.54.15.run ==
2024-08-07T17:56:32.4551888Z + pre_install_nvidia_driver_amzn2
2024-08-07T17:56:32.4554280Z + sudo yum remove -y nvidia-driver-latest-dkms
2024-08-07T17:56:32.8135222Z No match for argument: nvidia-driver-latest-dkms
2024-08-07T17:56:32.8138926Z No packages marked for removal.
2024-08-07T17:56:32.8216287Z Dependencies resolved.
2024-08-07T17:56:32.8230591Z Nothing to do.
2024-08-07T17:56:32.8232868Z Complete!
2024-08-07T17:56:32.8641712Z + install_nvidia_driver_common 2024-08-07T17:56:32.8645411Z + echo 'Before installing NVIDIA driver' 2024-08-07T17:56:32.8647969Z Before installing NVIDIA driver 2024-08-07T17:56:32.8649345Z + lspci 2024-08-07T17:56:32.8826729Z 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 2024-08-07T17:56:32.8827503Z 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] 2024-08-07T17:56:32.8828563Z 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] 2024-08-07T17:56:32.8829220Z 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 01) 2024-08-07T17:56:32.8829859Z 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 2024-08-07T17:56:32.8830513Z 00:03.0 Ethernet controller: Amazon.com, Inc. Elastic Network Adapter (ENA) 2024-08-07T17:56:32.8831335Z 00:1e.0 VGA compatible controller: NVIDIA Corporation GM204GL [Tesla M60] (rev a1) 2024-08-07T17:56:32.8832045Z 00:1f.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01) 2024-08-07T17:56:32.8832588Z + lsmod 2024-08-07T17:56:32.8874500Z Module Size Used by 2024-08-07T17:56:32.8874919Z veth 36864 0 2024-08-07T17:56:32.8875306Z nvidia_modeset 1351680 0 2024-08-07T17:56:32.8875711Z video 65536 1 nvidia_modeset 2024-08-07T17:56:32.8876131Z wmi 36864 1 video 2024-08-07T17:56:32.8876560Z nvidia_uvm 4706304 0 2024-08-07T17:56:32.8877008Z nvidia 54071296 7 nvidia_uvm,nvidia_modeset 2024-08-07T17:56:32.8877465Z drm 602112 1 nvidia 2024-08-07T17:56:32.8877879Z drm_panel_orientation_quirks 28672 1 drm 2024-08-07T17:56:32.8878387Z backlight 24576 3 video,drm,nvidia_modeset 2024-08-07T17:56:32.8879112Z i2c_core 106496 2 nvidia,drm 2024-08-07T17:56:32.8879511Z xt_conntrack 16384 1 2024-08-07T17:56:32.8879876Z nft_chain_nat 16384 3 2024-08-07T17:56:32.8880237Z xt_MASQUERADE 20480 1 2024-08-07T17:56:32.8880628Z nf_nat 57344 2 nft_chain_nat,xt_MASQUERADE 2024-08-07T17:56:32.8881084Z nf_conntrack_netlink 57344 0 2024-08-07T17:56:32.8881629Z nf_conntrack 184320 4 xt_conntrack,nf_nat,nf_conntrack_netlink,xt_MASQUERADE 2024-08-07T17:56:32.8882187Z nf_defrag_ipv6 24576 1 nf_conntrack 2024-08-07T17:56:32.8882620Z nf_defrag_ipv4 16384 1 nf_conntrack 2024-08-07T17:56:32.8883044Z xfrm_user 57344 1 2024-08-07T17:56:32.8883394Z xfrm_algo 16384 1 xfrm_user 2024-08-07T17:56:32.8883787Z xt_addrtype 16384 2 2024-08-07T17:56:32.8884150Z nft_compat 20480 4 2024-08-07T17:56:32.8884544Z nf_tables 307200 57 nft_compat,nft_chain_nat 2024-08-07T17:56:32.8885101Z nfnetlink 20480 4 nft_compat,nf_conntrack_netlink,nf_tables 2024-08-07T17:56:32.8885609Z br_netfilter 36864 0 2024-08-07T17:56:32.8885974Z bridge 307200 1 br_netfilter 2024-08-07T17:56:32.8886386Z stp 16384 1 bridge 2024-08-07T17:56:32.8886778Z llc 16384 2 bridge,stp 2024-08-07T17:56:32.8887165Z overlay 167936 0 2024-08-07T17:56:32.8887519Z tls 114688 0 2024-08-07T17:56:32.8887866Z nls_ascii 16384 1 2024-08-07T17:56:32.8888199Z nls_cp437 20480 1 2024-08-07T17:56:32.8888549Z ata_piix 45056 0 2024-08-07T17:56:32.8888916Z vfat 24576 1 2024-08-07T17:56:32.8889247Z sunrpc 692224 1 2024-08-07T17:56:32.8889608Z libata 401408 1 ata_piix 2024-08-07T17:56:32.8890001Z fat 86016 1 vfat 2024-08-07T17:56:32.8890358Z scsi_mod 290816 1 libata 2024-08-07T17:56:32.8890729Z ena 167936 0 2024-08-07T17:56:32.8891073Z ptp 36864 1 ena 2024-08-07T17:56:32.8891477Z scsi_common 16384 2 scsi_mod,libata 2024-08-07T17:56:32.8891915Z pps_core 24576 1 ptp 2024-08-07T17:56:32.8892270Z ghash_clmulni_intel 16384 0 
2024-08-07T17:56:32.8892641Z aesni_intel 393216 0 2024-08-07T17:56:32.8892999Z i8042 45056 0 2024-08-07T17:56:32.8893337Z serio 28672 3 i8042 2024-08-07T17:56:32.8893736Z crypto_simd 16384 1 aesni_intel 2024-08-07T17:56:32.8894214Z cryptd 28672 2 crypto_simd,ghash_clmulni_intel 2024-08-07T17:56:32.8895734Z button 24576 0 2024-08-07T17:56:32.8896403Z sch_fq_codel 20480 9 2024-08-07T17:56:32.8897040Z dm_mod 188416 0 2024-08-07T17:56:32.8897657Z fuse 163840 1 2024-08-07T17:56:32.8898361Z configfs 57344 1 2024-08-07T17:56:32.8898861Z loop 36864 0 2024-08-07T17:56:32.8899202Z dax 45056 1 dm_mod 2024-08-07T17:56:32.8899582Z dmi_sysfs 20480 0 2024-08-07T17:56:32.8899934Z crc32_pclmul 16384 0 2024-08-07T17:56:32.8900280Z crc32c_intel 24576 0 2024-08-07T17:56:32.8900631Z + modinfo nvidia 2024-08-07T17:56:32.8901661Z filename: /lib/modules/6.1.94-99.176.amzn2023.x86_64/kernel/drivers/video/nvidia.ko 2024-08-07T17:56:32.8902292Z alias: char-major-195-* 2024-08-07T17:56:32.8902651Z version: 550.54.15 2024-08-07T17:56:32.8902995Z supported: external 2024-08-07T17:56:32.8903315Z license: NVIDIA 2024-08-07T17:56:32.8903684Z firmware: nvidia/550.54.15/gsp_tu10x.bin 2024-08-07T17:56:32.8904343Z firmware: nvidia/550.54.15/gsp_ga10x.bin 2024-08-07T17:56:32.8905204Z srcversion: 833721318DA517F0C2FEC97 2024-08-07T17:56:32.8906080Z alias: pci:v000010DEd*sv*sd*bc06sc80i00* 2024-08-07T17:56:32.8906961Z alias: pci:v000010DEd*sv*sd*bc03sc02i00* 2024-08-07T17:56:32.8907830Z alias: pci:v000010DEd*sv*sd*bc03sc00i00* 2024-08-07T17:56:32.8908437Z depends: i2c-core,drm 2024-08-07T17:56:32.8908800Z retpoline: Y 2024-08-07T17:56:32.8909092Z name: nvidia 2024-08-07T17:56:32.8909580Z vermagic: 6.1.94-99.176.amzn2023.x86_64 SMP preempt mod_unload modversions 2024-08-07T17:56:32.8910213Z parm: NvSwitchRegDwords:NvSwitch regkey (charp) 2024-08-07T17:56:32.8910798Z parm: NvSwitchBlacklist:NvSwitchBlacklist=uuid[,uuid...] 
(charp) 2024-08-07T17:56:32.8911381Z parm: NVreg_ResmanDebugLevel:int 2024-08-07T17:56:32.8911800Z parm: NVreg_RmLogonRC:int 2024-08-07T17:56:32.8912194Z parm: NVreg_ModifyDeviceFiles:int 2024-08-07T17:56:32.8912627Z parm: NVreg_DeviceFileUID:int 2024-08-07T17:56:32.8913042Z parm: NVreg_DeviceFileGID:int 2024-08-07T17:56:32.8913439Z parm: NVreg_DeviceFileMode:int 2024-08-07T17:56:32.8913934Z parm: NVreg_InitializeSystemMemoryAllocations:int 2024-08-07T17:56:32.8914456Z parm: NVreg_UsePageAttributeTable:int 2024-08-07T17:56:32.8914893Z parm: NVreg_EnablePCIeGen3:int 2024-08-07T17:56:32.8915308Z parm: NVreg_EnableMSI:int 2024-08-07T17:56:32.8915707Z parm: NVreg_TCEBypassMode:int 2024-08-07T17:56:32.8916124Z parm: NVreg_EnableStreamMemOPs:int 2024-08-07T17:56:32.8916795Z parm: NVreg_RestrictProfilingToAdminUsers:int 2024-08-07T17:56:32.8917338Z parm: NVreg_PreserveVideoMemoryAllocations:int 2024-08-07T17:56:32.8917838Z parm: NVreg_EnableS0ixPowerManagement:int 2024-08-07T17:56:32.8918407Z parm: NVreg_S0ixPowerManagementVideoMemoryThreshold:int 2024-08-07T17:56:32.8918971Z parm: NVreg_DynamicPowerManagement:int 2024-08-07T17:56:32.8919523Z parm: NVreg_DynamicPowerManagementVideoMemoryThreshold:int 2024-08-07T17:56:32.8920074Z parm: NVreg_EnableGpuFirmware:int 2024-08-07T17:56:32.8920537Z parm: NVreg_EnableGpuFirmwareLogs:int 2024-08-07T17:56:32.8921016Z parm: NVreg_OpenRmEnableUnsupportedGpus:int 2024-08-07T17:56:32.8921532Z parm: NVreg_EnableUserNUMAManagement:int 2024-08-07T17:56:32.8921989Z parm: NVreg_MemoryPoolSize:int 2024-08-07T17:56:32.8922412Z parm: NVreg_KMallocHeapMaxSize:int 2024-08-07T17:56:32.8922882Z parm: NVreg_VMallocHeapMaxSize:int 2024-08-07T17:56:32.8923335Z parm: NVreg_IgnoreMMIOCheck:int 2024-08-07T17:56:32.8923742Z parm: NVreg_NvLinkDisable:int 2024-08-07T17:56:32.8924216Z parm: NVreg_EnablePCIERelaxedOrderingMode:int 2024-08-07T17:56:32.8924712Z parm: NVreg_RegisterPCIDriver:int 2024-08-07T17:56:32.8925368Z parm: NVreg_EnableResizableBar:int 2024-08-07T17:56:32.8925829Z parm: NVreg_EnableDbgBreakpoint:int 2024-08-07T17:56:32.8926300Z parm: NVreg_EnableNonblockingOpen:int 2024-08-07T17:56:32.8926747Z parm: NVreg_RegistryDwords:charp 2024-08-07T17:56:32.8927214Z parm: NVreg_RegistryDwordsPerDevice:charp 2024-08-07T17:56:32.8927675Z parm: NVreg_RmMsg:charp 2024-08-07T17:56:32.8928054Z parm: NVreg_GpuBlacklist:charp 2024-08-07T17:56:32.8928504Z parm: NVreg_TemporaryFilePath:charp 2024-08-07T17:56:32.8928949Z parm: NVreg_ExcludedGpus:charp 2024-08-07T17:56:32.8929462Z parm: NVreg_DmaRemapPeerMmio:int 2024-08-07T17:56:32.8929929Z parm: NVreg_RmNvlinkBandwidth:charp 2024-08-07T17:56:32.8930374Z parm: NVreg_ImexChannelCount:int 2024-08-07T17:56:32.8930793Z parm: rm_firmware_active:charp 2024-08-07T17:56:32.8931191Z + HAS_NVIDIA_DRIVER=0 2024-08-07T17:56:32.8931555Z ++ command -v nvidia-smi 2024-08-07T17:56:32.8931892Z + '[' -x /usr/bin/nvidia-smi ']' 2024-08-07T17:56:32.8932250Z + set +e 2024-08-07T17:56:32.8932676Z ++ nvidia-smi --query-gpu=driver_version --format=csv,noheader --id=0 2024-08-07T17:56:32.9206557Z + INSTALLED_DRIVER_VERSION=550.54.15 2024-08-07T17:56:32.9207210Z + NVIDIA_SMI_STATUS=0 2024-08-07T17:56:32.9207553Z + '[' 0 -ne 0 ']' 2024-08-07T17:56:32.9207834Z + '[' 550.54.15 '!=' 550.54.15 ']' 2024-08-07T17:56:32.9208208Z + HAS_NVIDIA_DRIVER=1 2024-08-07T17:56:32.9208806Z + echo 'NVIDIA driver (550.54.15) has already been installed. 
Skipping NVIDIA driver installation' 2024-08-07T17:56:32.9209432Z + set -e 2024-08-07T17:56:32.9209750Z + '[' 1 -eq 0 ']' 2024-08-07T17:56:32.9210270Z NVIDIA driver (550.54.15) has already been installed. Skipping NVIDIA driver installation 2024-08-07T17:56:32.9210871Z + post_install_nvidia_driver_common 2024-08-07T17:56:32.9214566Z + sudo modprobe nvidia 2024-08-07T17:56:33.0617511Z + echo 'After installing NVIDIA driver' 2024-08-07T17:56:33.0618030Z + lspci 2024-08-07T17:56:33.0618405Z After installing NVIDIA driver 2024-08-07T17:56:33.0798285Z 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 2024-08-07T17:56:33.0799771Z 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] 2024-08-07T17:56:33.0801106Z 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] 2024-08-07T17:56:33.0802222Z 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 01) 2024-08-07T17:56:33.0802791Z 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 2024-08-07T17:56:33.0803577Z 00:03.0 Ethernet controller: Amazon.com, Inc. Elastic Network Adapter (ENA) 2024-08-07T17:56:33.0804306Z 00:1e.0 VGA compatible controller: NVIDIA Corporation GM204GL [Tesla M60] (rev a1) 2024-08-07T17:56:33.0805032Z 00:1f.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01) 2024-08-07T17:56:33.0805572Z + lsmod 2024-08-07T17:56:33.0830514Z Module Size Used by 2024-08-07T17:56:33.0831127Z veth 36864 0 2024-08-07T17:56:33.0831837Z nvidia_modeset 1351680 0 2024-08-07T17:56:33.0832620Z video 65536 1 nvidia_modeset 2024-08-07T17:56:33.0833384Z wmi 36864 1 video 2024-08-07T17:56:33.0834126Z nvidia_uvm 4706304 0 2024-08-07T17:56:33.0834819Z nvidia 54071296 7 nvidia_uvm,nvidia_modeset 2024-08-07T17:56:33.0835271Z drm 602112 1 nvidia 2024-08-07T17:56:33.0835689Z drm_panel_orientation_quirks 28672 1 drm 2024-08-07T17:56:33.0836236Z backlight 24576 3 video,drm,nvidia_modeset 2024-08-07T17:56:33.0836796Z i2c_core 106496 2 nvidia,drm 2024-08-07T17:56:33.0837205Z xt_conntrack 16384 1 2024-08-07T17:56:33.0837549Z nft_chain_nat 16384 3 2024-08-07T17:56:33.0837916Z xt_MASQUERADE 20480 1 2024-08-07T17:56:33.0838327Z nf_nat 57344 2 nft_chain_nat,xt_MASQUERADE 2024-08-07T17:56:33.0839041Z nf_conntrack_netlink 57344 0 2024-08-07T17:56:33.0839584Z nf_conntrack 184320 4 xt_conntrack,nf_nat,nf_conntrack_netlink,xt_MASQUERADE 2024-08-07T17:56:33.0840172Z nf_defrag_ipv6 24576 1 nf_conntrack 2024-08-07T17:56:33.0840583Z nf_defrag_ipv4 16384 1 nf_conntrack 2024-08-07T17:56:33.0840990Z xfrm_user 57344 1 2024-08-07T17:56:33.0841361Z xfrm_algo 16384 1 xfrm_user 2024-08-07T17:56:33.0841740Z xt_addrtype 16384 2 2024-08-07T17:56:33.0842098Z nft_compat 20480 4 2024-08-07T17:56:33.0842517Z nf_tables 307200 57 nft_compat,nft_chain_nat 2024-08-07T17:56:33.0843231Z nfnetlink 20480 4 nft_compat,nf_conntrack_netlink,nf_tables 2024-08-07T17:56:33.0843764Z br_netfilter 36864 0 2024-08-07T17:56:33.0844151Z bridge 307200 1 br_netfilter 2024-08-07T17:56:33.0844539Z stp 16384 1 bridge 2024-08-07T17:56:33.0844999Z llc 16384 2 bridge,stp 2024-08-07T17:56:33.0845409Z overlay 167936 0 2024-08-07T17:56:33.0845745Z tls 114688 0 2024-08-07T17:56:33.0846098Z nls_ascii 16384 1 2024-08-07T17:56:33.0846455Z nls_cp437 20480 1 2024-08-07T17:56:33.0846795Z ata_piix 45056 0 2024-08-07T17:56:33.0847147Z vfat 24576 1 2024-08-07T17:56:33.0847497Z sunrpc 692224 1 2024-08-07T17:56:33.0847844Z libata 401408 1 ata_piix 2024-08-07T17:56:33.0848237Z fat 86016 1 vfat 2024-08-07T17:56:33.0848615Z scsi_mod 
290816 1 libata 2024-08-07T17:56:33.0848976Z ena 167936 0 2024-08-07T17:56:33.0849348Z ptp 36864 1 ena 2024-08-07T17:56:33.0849979Z scsi_common 16384 2 scsi_mod,libata 2024-08-07T17:56:33.0850756Z pps_core 24576 1 ptp 2024-08-07T17:56:33.0851470Z ghash_clmulni_intel 16384 0 2024-08-07T17:56:33.0852195Z aesni_intel 393216 0 2024-08-07T17:56:33.0852660Z i8042 45056 0 2024-08-07T17:56:33.0853013Z serio 28672 3 i8042 2024-08-07T17:56:33.0853413Z crypto_simd 16384 1 aesni_intel 2024-08-07T17:56:33.0853875Z cryptd 28672 2 crypto_simd,ghash_clmulni_intel 2024-08-07T17:56:33.0854338Z button 24576 0 2024-08-07T17:56:33.0854689Z sch_fq_codel 20480 9 2024-08-07T17:56:33.0855022Z dm_mod 188416 0 2024-08-07T17:56:33.0855367Z fuse 163840 1 2024-08-07T17:56:33.0855715Z configfs 57344 1 2024-08-07T17:56:33.0856044Z loop 36864 0 2024-08-07T17:56:33.0856403Z dax 45056 1 dm_mod 2024-08-07T17:56:33.0856789Z dmi_sysfs 20480 0 2024-08-07T17:56:33.0857120Z crc32_pclmul 16384 0 2024-08-07T17:56:33.0857473Z crc32c_intel 24576 0 2024-08-07T17:56:33.0857859Z + modinfo nvidia 2024-08-07T17:56:33.0858349Z filename: /lib/modules/6.1.94-99.176.amzn2023.x86_64/kernel/drivers/video/nvidia.ko 2024-08-07T17:56:33.0858949Z alias: char-major-195-* 2024-08-07T17:56:33.0859321Z version: 550.54.15 2024-08-07T17:56:33.0859651Z supported: external 2024-08-07T17:56:33.0859994Z license: NVIDIA 2024-08-07T17:56:33.0860361Z firmware: nvidia/550.54.15/gsp_tu10x.bin 2024-08-07T17:56:33.0860804Z firmware: nvidia/550.54.15/gsp_ga10x.bin 2024-08-07T17:56:33.0861235Z srcversion: 833721318DA517F0C2FEC97 2024-08-07T17:56:33.0861665Z alias: pci:v000010DEd*sv*sd*bc06sc80i00* 2024-08-07T17:56:33.0862110Z alias: pci:v000010DEd*sv*sd*bc03sc02i00* 2024-08-07T17:56:33.0862574Z alias: pci:v000010DEd*sv*sd*bc03sc00i00* 2024-08-07T17:56:33.0863007Z depends: i2c-core,drm 2024-08-07T17:56:33.0863346Z retpoline: Y 2024-08-07T17:56:33.0863652Z name: nvidia 2024-08-07T17:56:33.0864125Z vermagic: 6.1.94-99.176.amzn2023.x86_64 SMP preempt mod_unload modversions 2024-08-07T17:56:33.0864914Z parm: NvSwitchRegDwords:NvSwitch regkey (charp) 2024-08-07T17:56:33.0865513Z parm: NvSwitchBlacklist:NvSwitchBlacklist=uuid[,uuid...] 
(charp) 2024-08-07T17:56:33.0866076Z parm: NVreg_ResmanDebugLevel:int 2024-08-07T17:56:33.0866483Z parm: NVreg_RmLogonRC:int 2024-08-07T17:56:33.0866903Z parm: NVreg_ModifyDeviceFiles:int 2024-08-07T17:56:33.0867342Z parm: NVreg_DeviceFileUID:int 2024-08-07T17:56:33.0867743Z parm: NVreg_DeviceFileGID:int 2024-08-07T17:56:33.0868159Z parm: NVreg_DeviceFileMode:int 2024-08-07T17:56:33.0868651Z parm: NVreg_InitializeSystemMemoryAllocations:int 2024-08-07T17:56:33.0869245Z parm: NVreg_UsePageAttributeTable:int 2024-08-07T17:56:33.0869717Z parm: NVreg_EnablePCIeGen3:int 2024-08-07T17:56:33.0870132Z parm: NVreg_EnableMSI:int 2024-08-07T17:56:33.0870517Z parm: NVreg_TCEBypassMode:int 2024-08-07T17:56:33.0870948Z parm: NVreg_EnableStreamMemOPs:int 2024-08-07T17:56:33.0871456Z parm: NVreg_RestrictProfilingToAdminUsers:int 2024-08-07T17:56:33.0871975Z parm: NVreg_PreserveVideoMemoryAllocations:int 2024-08-07T17:56:33.0872488Z parm: NVreg_EnableS0ixPowerManagement:int 2024-08-07T17:56:33.0873045Z parm: NVreg_S0ixPowerManagementVideoMemoryThreshold:int 2024-08-07T17:56:33.0873576Z parm: NVreg_DynamicPowerManagement:int 2024-08-07T17:56:33.0874140Z parm: NVreg_DynamicPowerManagementVideoMemoryThreshold:int 2024-08-07T17:56:33.0874696Z parm: NVreg_EnableGpuFirmware:int 2024-08-07T17:56:33.0875138Z parm: NVreg_EnableGpuFirmwareLogs:int 2024-08-07T17:56:33.0875651Z parm: NVreg_OpenRmEnableUnsupportedGpus:int 2024-08-07T17:56:33.0876153Z parm: NVreg_EnableUserNUMAManagement:int 2024-08-07T17:56:33.0876599Z parm: NVreg_MemoryPoolSize:int 2024-08-07T17:56:33.0877043Z parm: NVreg_KMallocHeapMaxSize:int 2024-08-07T17:56:33.0877495Z parm: NVreg_VMallocHeapMaxSize:int 2024-08-07T17:56:33.0877933Z parm: NVreg_IgnoreMMIOCheck:int 2024-08-07T17:56:33.0878358Z parm: NVreg_NvLinkDisable:int 2024-08-07T17:56:33.0878824Z parm: NVreg_EnablePCIERelaxedOrderingMode:int 2024-08-07T17:56:33.0879298Z parm: NVreg_RegisterPCIDriver:int 2024-08-07T17:56:33.0879749Z parm: NVreg_EnableResizableBar:int 2024-08-07T17:56:33.0880209Z parm: NVreg_EnableDbgBreakpoint:int 2024-08-07T17:56:33.0880661Z parm: NVreg_EnableNonblockingOpen:int 2024-08-07T17:56:33.0881120Z parm: NVreg_RegistryDwords:charp 2024-08-07T17:56:33.0881586Z parm: NVreg_RegistryDwordsPerDevice:charp 2024-08-07T17:56:33.0882023Z parm: NVreg_RmMsg:charp 2024-08-07T17:56:33.0882422Z parm: NVreg_GpuBlacklist:charp 2024-08-07T17:56:33.0882866Z parm: NVreg_TemporaryFilePath:charp 2024-08-07T17:56:33.0883295Z parm: NVreg_ExcludedGpus:charp 2024-08-07T17:56:33.0883725Z parm: NVreg_DmaRemapPeerMmio:int 2024-08-07T17:56:33.0884190Z parm: NVreg_RmNvlinkBandwidth:charp 2024-08-07T17:56:33.0884631Z parm: NVreg_ImexChannelCount:int 2024-08-07T17:56:33.0885058Z parm: rm_firmware_active:charp 2024-08-07T17:56:33.0885428Z + set +e 2024-08-07T17:56:33.0885713Z + nvidia-smi 2024-08-07T17:56:33.1067346Z Wed Aug 7 17:56:33 2024 2024-08-07T17:56:33.1067886Z +-----------------------------------------------------------------------------------------+ 2024-08-07T17:56:33.1068533Z | NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 | 2024-08-07T17:56:33.1069207Z |-----------------------------------------+------------------------+----------------------+ 2024-08-07T17:56:33.1069879Z | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | 2024-08-07T17:56:33.1070570Z | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. | 2024-08-07T17:56:33.1071448Z | | | MIG M. 
| 2024-08-07T17:56:33.1071918Z |=========================================+========================+======================| 2024-08-07T17:56:33.1153398Z | 0 Tesla M60 On | 00000000:00:1E.0 Off | 0 | 2024-08-07T17:56:33.1154028Z | N/A 38C P8 14W / 150W | 0MiB / 7680MiB | 0% Default | 2024-08-07T17:56:33.1154547Z | | | N/A | 2024-08-07T17:56:33.1155108Z +-----------------------------------------+------------------------+----------------------+ 2024-08-07T17:56:33.1155813Z 2024-08-07T17:56:33.1156398Z +-----------------------------------------------------------------------------------------+ 2024-08-07T17:56:33.1156984Z | Processes: | 2024-08-07T17:56:33.1157625Z | GPU GI CI PID Type Process name GPU Memory | 2024-08-07T17:56:33.1158179Z | ID ID Usage | 2024-08-07T17:56:33.1158656Z |=========================================================================================| 2024-08-07T17:56:33.1159233Z | No running processes found | 2024-08-07T17:56:33.1159854Z +-----------------------------------------------------------------------------------------+ 2024-08-07T17:56:33.1776862Z + nvidia-smi --query-gpu=gpu_name --format=csv,noheader --id=0 2024-08-07T17:56:33.1974377Z Tesla M60 2024-08-07T17:56:33.2037208Z + NVIDIA_SMI_STATUS=0 2024-08-07T17:56:33.2037933Z + '[' 0 -eq 0 ']' 2024-08-07T17:56:33.2038628Z + echo 'INFO: Ignoring allowed status 0' 2024-08-07T17:56:33.2039231Z + set -e 2024-08-07T17:56:33.2039532Z INFO: Ignoring allowed status 0 2024-08-07T17:56:33.2046427Z == Installing nvidia container toolkit for amzn2023 == 2024-08-07T17:56:33.2050259Z + sudo yum install -y yum-utils 2024-08-07T17:56:33.7605630Z Last metadata expiration check: 2:07:58 ago on Wed Aug 7 15:48:35 2024. 2024-08-07T17:56:33.7910943Z Package dnf-utils-4.3.0-13.amzn2023.0.4.noarch is already installed. 2024-08-07T17:56:33.8340214Z Dependencies resolved. 2024-08-07T17:56:33.8526852Z Nothing to do. 2024-08-07T17:56:33.8527505Z Complete! 2024-08-07T17:56:33.8983464Z + [[ amzn2023 == \a\m\z\n\2\0\2\3 ]] 2024-08-07T17:56:33.8984700Z + YUM_REPO_URL=https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo 2024-08-07T17:56:33.8985871Z + sudo yum-config-manager --add-repo https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo 2024-08-07T17:56:34.2451761Z Adding repo from: https://nvidia.github.io/libnvidia-container/stable/rpm/nvidia-container-toolkit.repo 2024-08-07T17:56:34.3269097Z + sudo yum install -y nvidia-docker2 2024-08-07T17:56:34.9951757Z nvidia-container-toolkit 8.1 kB/s | 833 B 00:00 2024-08-07T17:56:35.0266732Z Package nvidia-docker2-2.14.0-1.noarch is already installed. 2024-08-07T17:56:35.0690183Z Dependencies resolved. 2024-08-07T17:56:35.0881902Z Nothing to do. 2024-08-07T17:56:35.0882609Z Complete! 2024-08-07T17:56:35.1359705Z + sudo systemctl restart docker 2024-08-07T17:56:58.9285991Z nvidia-persistenced failed to initialize. Check syslog for more details. 2024-08-07T17:56:58.9522173Z Wed Aug 7 17:56:58 2024 2024-08-07T17:56:58.9522763Z +-----------------------------------------------------------------------------------------+ 2024-08-07T17:56:58.9523464Z | NVIDIA-SMI 550.54.15 Driver Version: 550.54.15 CUDA Version: 12.4 | 2024-08-07T17:56:58.9524128Z |-----------------------------------------+------------------------+----------------------+ 2024-08-07T17:56:58.9524801Z | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC | 2024-08-07T17:56:58.9525881Z | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. 
|
2024-08-07T17:56:58.9526476Z |                                         |                        |               MIG M. |
2024-08-07T17:56:58.9526936Z |=========================================+========================+======================|
2024-08-07T17:56:58.9606562Z |   0  Tesla M60                      On  |   00000000:00:1E.0 Off |                    0 |
2024-08-07T17:56:58.9607179Z | N/A   38C    P8             15W / 150W  |       0MiB /  7680MiB  |      0%      Default |
2024-08-07T17:56:58.9608231Z |                                         |                        |                  N/A |
2024-08-07T17:56:58.9608958Z +-----------------------------------------+------------------------+----------------------+
2024-08-07T17:56:58.9609478Z
2024-08-07T17:56:58.9610033Z +-----------------------------------------------------------------------------------------+
2024-08-07T17:56:58.9610625Z | Processes:                                                                              |
2024-08-07T17:56:58.9611236Z |  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
2024-08-07T17:56:58.9611794Z |        ID   ID                                                               Usage      |
2024-08-07T17:56:58.9612269Z |=========================================================================================|
2024-08-07T17:56:58.9612855Z |  No running processes found                                                             |
2024-08-07T17:56:58.9613481Z +-----------------------------------------------------------------------------------------+
2024-08-07T17:56:59.4615657Z Command completed after 1 attempt(s).
2024-08-07T17:56:59.4693369Z ##[group]Run python3 -m pip install psutil==5.9.1 nvidia-ml-py==11.525.84
2024-08-07T17:56:59.4694200Z python3 -m pip install psutil==5.9.1 nvidia-ml-py==11.525.84
2024-08-07T17:56:59.4694896Z python3 -m tools.stats.monitor > usage_log.txt 2>&1 &
2024-08-07T17:56:59.4696058Z echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}"
2024-08-07T17:56:59.4706295Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2024-08-07T17:56:59.4706778Z env:
2024-08-07T17:56:59.4707066Z   GIT_DEFAULT_BRANCH: main
2024-08-07T17:56:59.4707475Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T17:56:59.4707931Z ##[endgroup]
2024-08-07T17:56:59.8243839Z Defaulting to user installation because normal site-packages is not writeable
2024-08-07T17:56:59.8462259Z Requirement already satisfied: psutil==5.9.1 in /home/ec2-user/.local/lib/python3.9/site-packages (5.9.1)
2024-08-07T17:56:59.8471047Z Requirement already satisfied: nvidia-ml-py==11.525.84 in /home/ec2-user/.local/lib/python3.9/site-packages (11.525.84)
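The pattern in the step above is worth noting: the resource monitor is launched in the background and its PID ($!) is written to ${GITHUB_OUTPUT}, so a teardown step can stop it after the tests. A sketch of both halves (the step id `monitor` and the kill step are assumptions, not shown in this log):

  # Start the watcher in the background and expose its PID as a step output.
  python3 -m tools.stats.monitor > usage_log.txt 2>&1 &
  echo "monitor-script-pid=${!}" >> "${GITHUB_OUTPUT}"

  # Later teardown step (assumes the step above was given `id: monitor`):
  #   kill "${{ steps.monitor.outputs.monitor-script-pid }}" || true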
2024-08-07T17:57:00.0211288Z Prepare all required actions
2024-08-07T17:57:00.0212039Z Getting action download info
2024-08-07T17:57:00.1279671Z Download action repository 'seemethere/download-artifact-s3@v4' (SHA:1da556a7aa0a088e3153970611f6c432d58e80e6)
2024-08-07T17:57:02.7075302Z Download action repository 'actions/download-artifact@v3' (SHA:9bc31d5ccc31df68ecc42ccf4149144866c47d8a)
2024-08-07T17:57:02.9677415Z ##[group]Run ./.github/actions/download-build-artifacts
2024-08-07T17:57:02.9677916Z with:
2024-08-07T17:57:02.9678255Z   name: linux-focal-cuda12.1-py3.10-gcc9
2024-08-07T17:57:02.9678682Z   s3-bucket: gha-artifacts
2024-08-07T17:57:02.9679043Z env:
2024-08-07T17:57:02.9679338Z   GIT_DEFAULT_BRANCH: main
2024-08-07T17:57:02.9679782Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T17:57:02.9680267Z ##[endgroup]
2024-08-07T17:57:02.9727033Z ##[group]Run seemethere/download-artifact-s3@v4
2024-08-07T17:57:02.9727471Z with:
2024-08-07T17:57:02.9727848Z   name: linux-focal-cuda12.1-py3.10-gcc9
2024-08-07T17:57:02.9728258Z   s3-bucket: gha-artifacts
2024-08-07T17:57:02.9728853Z   region: us-east-1
2024-08-07T17:57:02.9729156Z env:
2024-08-07T17:57:02.9729417Z   GIT_DEFAULT_BRANCH: main
2024-08-07T17:57:02.9729845Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T17:57:02.9730301Z ##[endgroup]
2024-08-07T17:57:03.6325063Z (node:89455) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.
2024-08-07T17:57:03.6325675Z
2024-08-07T17:57:03.6325940Z Please migrate your code to use AWS SDK for JavaScript (v3).
2024-08-07T17:57:03.6326598Z For more information, check the migration guide at https://a.co/7PzMCcy
2024-08-07T17:57:03.6327309Z (Use `node --trace-warnings ...` to show where the warning was created)
2024-08-07T17:57:03.7274401Z Found 1 objects with prefix pytorch/pytorch/10288745067/linux-focal-cuda12.1-py3.10-gcc9/
2024-08-07T17:57:03.7275426Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip
2024-08-07T17:57:16.8680121Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/artifacts.zip
2024-08-07T17:57:16.8691011Z Artifact download has finished successfully
2024-08-07T17:57:16.8938417Z ##[group]Run unzip -o artifacts.zip
2024-08-07T17:57:16.8938889Z unzip -o artifacts.zip
2024-08-07T17:57:16.8946289Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2024-08-07T17:57:16.8946804Z env:
2024-08-07T17:57:16.8947084Z   GIT_DEFAULT_BRANCH: main
2024-08-07T17:57:16.8947550Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T17:57:16.8948041Z ##[endgroup]
2024-08-07T17:57:16.8998127Z Archive:  artifacts.zip
2024-08-07T17:57:16.8999861Z    creating: dist/
2024-08-07T17:57:19.3229395Z   inflating: dist/torch-2.5.0a0+git016588f-cp310-cp310-linux_x86_64.whl
2024-08-07T17:57:19.3230048Z    creating: build/custom_test_artifacts/
2024-08-07T17:57:19.3230563Z    creating: build/custom_test_artifacts/custom-op-build/
2024-08-07T17:57:19.3231214Z    creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/
2024-08-07T17:57:19.3231952Z    creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/pkgRedirects/
2024-08-07T17:57:19.3239872Z   inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeConfigureLog.yaml
2024-08-07T17:57:19.3240713Z    creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/
2024-08-07T17:57:19.3241514Z   inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CMakeSystem.cmake
2024-08-07T17:57:19.3242385Z    creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdC/
2024-08-07T17:57:19.3243240Z    creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdC/tmp/
2024-08-07T17:57:19.3245329Z   inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdC/CMakeCCompilerId.c
2024-08-07T17:57:19.3247433Z   inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdC/a.out
2024-08-07T17:57:19.3248372Z    creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCXX/
2024-08-07T17:57:19.3249300Z    creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCXX/tmp/
2024-08-07T17:57:19.3251744Z   inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCXX/CMakeCXXCompilerId.cpp
2024-08-07T17:57:19.3253278Z   inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCXX/a.out
2024-08-07T17:57:19.3255930Z   inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_C.bin
2024-08-07T17:57:19.3256985Z   inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CMakeCCompiler.cmake
2024-08-07T17:57:19.3259288Z   inflating:
build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_CXX.bin 2024-08-07T17:57:19.3260652Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CMakeCXXCompiler.cmake 2024-08-07T17:57:19.3261839Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/ 2024-08-07T17:57:19.3262762Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/ 2024-08-07T17:57:19.3310679Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2024-08-07T17:57:19.3357570Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2024-08-07T17:57:19.3358861Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2024-08-07T17:57:19.3413100Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2024-08-07T17:57:19.3414356Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2024-08-07T17:57:19.3415624Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 2024-08-07T17:57:19.3416936Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2024-08-07T17:57:19.3418199Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2024-08-07T17:57:19.3419411Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2024-08-07T17:57:19.3420649Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2024-08-07T17:57:19.3421895Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2024-08-07T17:57:19.3423208Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2024-08-07T17:57:19.3424433Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2024-08-07T17:57:19.3425623Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.reg.c 2024-08-07T17:57:19.3426799Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.fatbin 2024-08-07T17:57:19.3427983Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2024-08-07T17:57:19.3429116Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.o 2024-08-07T17:57:19.3430319Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/CMakeCUDACompilerId.cu 2024-08-07T17:57:19.3503469Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CompilerIdCUDA/a.out 2024-08-07T17:57:19.3576107Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_CUDA.bin 2024-08-07T17:57:19.3577175Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/3.26.4/CMakeCUDACompiler.cmake 2024-08-07T17:57:19.3578130Z creating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeScratch/ 2024-08-07T17:57:19.3578959Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeTmp/ 2024-08-07T17:57:19.3579811Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/cmake.check_cache 2024-08-07T17:57:19.3580683Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/ 2024-08-07T17:57:19.3581671Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.ts 2024-08-07T17:57:19.3582785Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/compiler_depend.make 2024-08-07T17:57:19.3583848Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/depend.make 2024-08-07T17:57:19.3585092Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/link.txt 2024-08-07T17:57:19.3586118Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/cmake_clean.cmake 2024-08-07T17:57:19.3587151Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/build.make 2024-08-07T17:57:19.3588175Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/DependInfo.cmake 2024-08-07T17:57:19.3589208Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/flags.make 2024-08-07T17:57:19.3590220Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/progress.make 2024-08-07T17:57:19.3613535Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o.d 2024-08-07T17:57:19.3785806Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/custom_ops.dir/op.cpp.o 2024-08-07T17:57:19.3786732Z creating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/ 2024-08-07T17:57:19.3787701Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.ts 2024-08-07T17:57:19.3788767Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/compiler_depend.make 2024-08-07T17:57:19.3789809Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/depend.make 2024-08-07T17:57:19.3790785Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/link.txt 2024-08-07T17:57:19.3791787Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/cmake_clean.cmake 2024-08-07T17:57:19.3792785Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/build.make 2024-08-07T17:57:19.3793801Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/DependInfo.cmake 2024-08-07T17:57:19.3794823Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/flags.make 2024-08-07T17:57:19.3796059Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/progress.make 2024-08-07T17:57:19.3820725Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o.d 2024-08-07T17:57:19.3920820Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/test_custom_ops.dir/test_custom_ops.cpp.o 2024-08-07T17:57:19.3921915Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/CMakeDirectoryInformation.cmake 2024-08-07T17:57:19.3922880Z inflating: 
build/custom_test_artifacts/custom-op-build/CMakeFiles/TargetDirectories.txt 2024-08-07T17:57:19.3923971Z extracting: build/custom_test_artifacts/custom-op-build/CMakeFiles/progress.marks 2024-08-07T17:57:19.3924820Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile2 2024-08-07T17:57:19.3926216Z inflating: build/custom_test_artifacts/custom-op-build/CMakeFiles/Makefile.cmake 2024-08-07T17:57:19.3927024Z inflating: build/custom_test_artifacts/custom-op-build/detect_cuda_version.cc 2024-08-07T17:57:19.3930553Z inflating: build/custom_test_artifacts/custom-op-build/CMakeCache.txt 2024-08-07T17:57:19.3931515Z inflating: build/custom_test_artifacts/custom-op-build/Makefile 2024-08-07T17:57:19.3932541Z inflating: build/custom_test_artifacts/custom-op-build/cmake_install.cmake 2024-08-07T17:57:19.4077105Z inflating: build/custom_test_artifacts/custom-op-build/libcustom_ops.so 2024-08-07T17:57:19.4153444Z inflating: build/custom_test_artifacts/custom-op-build/test_custom_ops 2024-08-07T17:57:19.4154099Z creating: build/custom_test_artifacts/jit-hook-build/ 2024-08-07T17:57:19.4154766Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/ 2024-08-07T17:57:19.4155767Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/pkgRedirects/ 2024-08-07T17:57:19.4164553Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeConfigureLog.yaml 2024-08-07T17:57:19.4165380Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/ 2024-08-07T17:57:19.4166168Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CMakeSystem.cmake 2024-08-07T17:57:19.4167046Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdC/ 2024-08-07T17:57:19.4167887Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdC/tmp/ 2024-08-07T17:57:19.4169474Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdC/CMakeCCompilerId.c 2024-08-07T17:57:19.4171843Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdC/a.out 2024-08-07T17:57:19.4172726Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCXX/ 2024-08-07T17:57:19.4173595Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCXX/tmp/ 2024-08-07T17:57:19.4175810Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCXX/CMakeCXXCompilerId.cpp 2024-08-07T17:57:19.4177399Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCXX/a.out 2024-08-07T17:57:19.4180376Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_C.bin 2024-08-07T17:57:19.4181383Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CMakeCCompiler.cmake 2024-08-07T17:57:19.4184063Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_CXX.bin 2024-08-07T17:57:19.4186165Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CMakeCXXCompiler.cmake 2024-08-07T17:57:19.4187083Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/ 2024-08-07T17:57:19.4188003Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/ 2024-08-07T17:57:19.4236525Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2024-08-07T17:57:19.4283944Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2024-08-07T17:57:19.4285211Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2024-08-07T17:57:19.4340772Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2024-08-07T17:57:19.4342062Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2024-08-07T17:57:19.4344032Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 2024-08-07T17:57:19.4345396Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2024-08-07T17:57:19.4346699Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2024-08-07T17:57:19.4348035Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2024-08-07T17:57:19.4349399Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2024-08-07T17:57:19.4350884Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2024-08-07T17:57:19.4352850Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2024-08-07T17:57:19.4354006Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2024-08-07T17:57:19.4355298Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.reg.c 2024-08-07T17:57:19.4356685Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.fatbin 2024-08-07T17:57:19.4358289Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2024-08-07T17:57:19.4360008Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.o 2024-08-07T17:57:19.4362662Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/CMakeCUDACompilerId.cu 2024-08-07T17:57:19.4436515Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CompilerIdCUDA/a.out 2024-08-07T17:57:19.4510697Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_CUDA.bin 2024-08-07T17:57:19.4511811Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/3.26.4/CMakeCUDACompiler.cmake 2024-08-07T17:57:19.4512701Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeScratch/ 2024-08-07T17:57:19.4513459Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeTmp/ 2024-08-07T17:57:19.4514423Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/cmake.check_cache 2024-08-07T17:57:19.4515239Z creating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/ 2024-08-07T17:57:19.4516471Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.ts 2024-08-07T17:57:19.4517774Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/compiler_depend.make 2024-08-07T17:57:19.4518941Z inflating: 
build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/depend.make 2024-08-07T17:57:19.4520188Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/link.txt 2024-08-07T17:57:19.4521460Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/cmake_clean.cmake 2024-08-07T17:57:19.4523106Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/build.make 2024-08-07T17:57:19.4524419Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/DependInfo.cmake 2024-08-07T17:57:19.4525594Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/flags.make 2024-08-07T17:57:19.4527085Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/progress.make 2024-08-07T17:57:19.4552611Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o.d 2024-08-07T17:57:19.4631203Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/test_jit_hooks.dir/test_jit_hooks.cpp.o 2024-08-07T17:57:19.4632697Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/CMakeDirectoryInformation.cmake 2024-08-07T17:57:19.4633876Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/TargetDirectories.txt 2024-08-07T17:57:19.4634780Z extracting: build/custom_test_artifacts/jit-hook-build/CMakeFiles/progress.marks 2024-08-07T17:57:19.4636328Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile2 2024-08-07T17:57:19.4638632Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeFiles/Makefile.cmake 2024-08-07T17:57:19.4639753Z inflating: build/custom_test_artifacts/jit-hook-build/detect_cuda_version.cc 2024-08-07T17:57:19.4643692Z inflating: build/custom_test_artifacts/jit-hook-build/CMakeCache.txt 2024-08-07T17:57:19.4645137Z inflating: build/custom_test_artifacts/jit-hook-build/Makefile 2024-08-07T17:57:19.4646475Z inflating: build/custom_test_artifacts/jit-hook-build/cmake_install.cmake 2024-08-07T17:57:19.4707929Z inflating: build/custom_test_artifacts/jit-hook-build/test_jit_hooks 2024-08-07T17:57:19.4708604Z creating: build/custom_test_artifacts/custom-backend-build/ 2024-08-07T17:57:19.4709323Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/ 2024-08-07T17:57:19.4710134Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/pkgRedirects/ 2024-08-07T17:57:19.4719278Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeConfigureLog.yaml 2024-08-07T17:57:19.4720165Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/ 2024-08-07T17:57:19.4721060Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CMakeSystem.cmake 2024-08-07T17:57:19.4721974Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdC/ 2024-08-07T17:57:19.4722903Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdC/tmp/ 2024-08-07T17:57:19.4726319Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdC/CMakeCCompilerId.c 2024-08-07T17:57:19.4728831Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdC/a.out 2024-08-07T17:57:19.4729755Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCXX/ 2024-08-07T17:57:19.4730708Z creating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCXX/tmp/ 2024-08-07T17:57:19.4733912Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCXX/CMakeCXXCompilerId.cpp 2024-08-07T17:57:19.4736399Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCXX/a.out 2024-08-07T17:57:19.4739009Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_C.bin 2024-08-07T17:57:19.4740114Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CMakeCCompiler.cmake 2024-08-07T17:57:19.4743100Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_CXX.bin 2024-08-07T17:57:19.4746757Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CMakeCXXCompiler.cmake 2024-08-07T17:57:19.4747744Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/ 2024-08-07T17:57:19.4748751Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/ 2024-08-07T17:57:19.4797307Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp4.ii 2024-08-07T17:57:19.4844693Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.cpp 2024-08-07T17:57:19.4846264Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.module_id 2024-08-07T17:57:19.4901877Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cpp1.ii 2024-08-07T17:57:19.4903213Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.c 2024-08-07T17:57:19.4904580Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.gpu 2024-08-07T17:57:19.4905945Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.cudafe1.stub.c 2024-08-07T17:57:19.4907283Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.ptx 2024-08-07T17:57:19.4908608Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.sm_52.cubin 2024-08-07T17:57:19.4910376Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin 2024-08-07T17:57:19.4911815Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.fatbin.c 2024-08-07T17:57:19.4913652Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/CMakeCUDACompilerId.o 2024-08-07T17:57:19.4914974Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.sm_52.cubin 2024-08-07T17:57:19.4916420Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.reg.c 2024-08-07T17:57:19.4917584Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.fatbin 2024-08-07T17:57:19.4919184Z inflating: 
build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.fatbin.c 2024-08-07T17:57:19.4920884Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/tmp/a_dlink.o 2024-08-07T17:57:19.4923553Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/CMakeCUDACompilerId.cu 2024-08-07T17:57:19.4997499Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CompilerIdCUDA/a.out 2024-08-07T17:57:19.5071078Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CMakeDetermineCompilerABI_CUDA.bin 2024-08-07T17:57:19.5072412Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/3.26.4/CMakeCUDACompiler.cmake 2024-08-07T17:57:19.5073388Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeScratch/ 2024-08-07T17:57:19.5074247Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeTmp/ 2024-08-07T17:57:19.5075404Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/cmake.check_cache 2024-08-07T17:57:19.5076347Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/ 2024-08-07T17:57:19.5077520Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.ts 2024-08-07T17:57:19.5078688Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/compiler_depend.make 2024-08-07T17:57:19.5079804Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/depend.make 2024-08-07T17:57:19.5081186Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/link.txt 2024-08-07T17:57:19.5083514Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/cmake_clean.cmake 2024-08-07T17:57:19.5085244Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/build.make 2024-08-07T17:57:19.5086366Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/DependInfo.cmake 2024-08-07T17:57:19.5087442Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/flags.make 2024-08-07T17:57:19.5088984Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/progress.make 2024-08-07T17:57:19.5095352Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o.d 2024-08-07T17:57:19.5247183Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/custom_backend.dir/custom_backend.cpp.o 2024-08-07T17:57:19.5248225Z creating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/ 2024-08-07T17:57:19.5249332Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.ts 2024-08-07T17:57:19.5250529Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/compiler_depend.make 2024-08-07T17:57:19.5251903Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/depend.make 2024-08-07T17:57:19.5252964Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/link.txt 2024-08-07T17:57:19.5254267Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/cmake_clean.cmake 
2024-08-07T17:57:19.5255730Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/build.make 2024-08-07T17:57:19.5257087Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/DependInfo.cmake 2024-08-07T17:57:19.5258338Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/flags.make 2024-08-07T17:57:19.5259800Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/progress.make 2024-08-07T17:57:19.5285333Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o.d 2024-08-07T17:57:19.5353161Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/test_custom_backend.dir/test_custom_backend.cpp.o 2024-08-07T17:57:19.5354358Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/CMakeDirectoryInformation.cmake 2024-08-07T17:57:19.5355423Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/TargetDirectories.txt 2024-08-07T17:57:19.5356384Z extracting: build/custom_test_artifacts/custom-backend-build/CMakeFiles/progress.marks 2024-08-07T17:57:19.5358492Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile2 2024-08-07T17:57:19.5360615Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeFiles/Makefile.cmake 2024-08-07T17:57:19.5361522Z inflating: build/custom_test_artifacts/custom-backend-build/detect_cuda_version.cc 2024-08-07T17:57:19.5365517Z inflating: build/custom_test_artifacts/custom-backend-build/CMakeCache.txt 2024-08-07T17:57:19.5367320Z inflating: build/custom_test_artifacts/custom-backend-build/Makefile 2024-08-07T17:57:19.5368540Z inflating: build/custom_test_artifacts/custom-backend-build/cmake_install.cmake 2024-08-07T17:57:19.5495348Z inflating: build/custom_test_artifacts/custom-backend-build/libcustom_backend.so 2024-08-07T17:57:19.5547818Z inflating: build/custom_test_artifacts/custom-backend-build/test_custom_backend 2024-08-07T17:57:19.5548439Z creating: build/lib/ 2024-08-07T17:57:19.5561143Z inflating: build/lib/libpthreadpool.a 2024-08-07T17:57:19.5571698Z inflating: build/lib/libcpuinfo.a 2024-08-07T17:57:19.5581952Z inflating: build/lib/libcpuinfo_internals.a 2024-08-07T17:57:19.5583757Z inflating: build/lib/libclog.a 2024-08-07T17:57:19.5691878Z inflating: build/lib/libprotobuf-lite.a 2024-08-07T17:57:19.5694827Z inflating: build/lib/libnnpack_reference_layers.a 2024-08-07T17:57:19.6244610Z inflating: build/lib/libprotobuf.a 2024-08-07T17:57:19.6324514Z inflating: build/lib/libgtest.a 2024-08-07T17:57:19.6416101Z inflating: build/lib/libbenchmark.a 2024-08-07T17:57:19.6426188Z inflating: build/lib/libittnotify.a 2024-08-07T17:57:19.6448721Z inflating: build/lib/libnnpack.a 2024-08-07T17:57:19.6483149Z inflating: build/lib/libtensorpipe_uv.a 2024-08-07T17:57:19.6561425Z inflating: build/lib/libasmjit.a 2024-08-07T17:57:19.6715641Z inflating: build/lib/libgloo.a 2024-08-07T17:57:19.6741499Z inflating: build/lib/libfmt.a 2024-08-07T17:57:19.6857799Z inflating: build/lib/libc10.so 2024-08-07T17:57:19.6860307Z inflating: build/lib/libcaffe2_nvrtc.so 2024-08-07T17:57:19.6861999Z inflating: build/lib/libfoxi_loader.a 2024-08-07T17:57:19.6864475Z inflating: build/lib/libtorch_global_deps.so 2024-08-07T17:57:19.6888791Z inflating: build/lib/libpytorch_qnnpack.a 2024-08-07T17:57:19.6911899Z inflating: build/lib/libgmock.a 2024-08-07T17:57:19.6913334Z 
inflating: build/lib/libgtest_main.a 2024-08-07T17:57:19.6914939Z inflating: build/lib/libbenchmark_main.a 2024-08-07T17:57:20.9072010Z inflating: build/lib/libdnnl.a 2024-08-07T17:57:20.9681655Z inflating: build/lib/libprotoc.a 2024-08-07T17:57:21.0371917Z inflating: build/lib/libtensorpipe.a 2024-08-07T17:57:21.0442932Z inflating: build/lib/libc10_cuda.so 2024-08-07T17:57:21.1947318Z inflating: build/lib/libfbgemm.a 2024-08-07T17:57:21.1948343Z inflating: build/lib/libgmock_main.a 2024-08-07T17:57:21.2256676Z inflating: build/lib/libtensorpipe_cuda.a 2024-08-07T17:57:21.2870280Z inflating: build/lib/libkineto.a 2024-08-07T17:57:21.3333795Z inflating: build/lib/libgloo_cuda.a 2024-08-07T17:57:21.3385559Z inflating: build/lib/libonnx_proto.a 2024-08-07T17:57:21.3610432Z inflating: build/lib/libXNNPACK.a 2024-08-07T17:57:21.4445207Z inflating: build/lib/libonnx.a 2024-08-07T17:57:24.3698967Z inflating: build/lib/libtorch_cpu.so 2024-08-07T17:57:24.3704198Z inflating: build/lib/libshm.so 2024-08-07T17:57:24.3710245Z inflating: build/lib/libunbox_lib.a 2024-08-07T17:57:26.8313019Z inflating: build/lib/libtorch_cuda.so 2024-08-07T17:57:26.8314161Z inflating: build/lib/libtorch.so 2024-08-07T17:57:26.8317574Z inflating: build/lib/libc10d_cuda_test.so 2024-08-07T17:57:27.8330806Z inflating: build/lib/libtorch_cuda_linalg.so 2024-08-07T17:57:27.8355881Z inflating: build/lib/libjitbackend_test.so 2024-08-07T17:57:27.8443001Z inflating: build/lib/libtorchbind_test.so 2024-08-07T17:57:27.8474785Z inflating: build/lib/libaoti_custom_ops.so 2024-08-07T17:57:27.8506444Z inflating: build/lib/libbackend_with_compiler.so 2024-08-07T17:57:28.0916215Z inflating: build/lib/libtorch_python.so 2024-08-07T17:57:28.0958375Z inflating: build/lib/libnnapi_backend.so 2024-08-07T17:57:28.0958807Z creating: build/bin/ 2024-08-07T17:57:28.1020735Z inflating: build/bin/c10_CompileTimeFunctionPointer_test 2024-08-07T17:57:28.1083413Z inflating: build/bin/c10_DeviceGuard_test 2024-08-07T17:57:28.1145015Z inflating: build/bin/c10_Device_test 2024-08-07T17:57:28.1217442Z inflating: build/bin/c10_DispatchKeySet_test 2024-08-07T17:57:28.1282415Z inflating: build/bin/c10_Scalar_test 2024-08-07T17:57:28.1342465Z inflating: build/bin/c10_StreamGuard_test 2024-08-07T17:57:28.1405423Z inflating: build/bin/c10_SymInt_test 2024-08-07T17:57:28.1471988Z inflating: build/bin/c10_InlineDeviceGuard_test 2024-08-07T17:57:28.1540282Z inflating: build/bin/c10_InlineStreamGuard_test 2024-08-07T17:57:28.1609310Z inflating: build/bin/c10_SizesAndStrides_test 2024-08-07T17:57:28.1696595Z inflating: build/bin/c10_cow_test 2024-08-07T17:57:28.1761147Z inflating: build/bin/c10_Bitset_test 2024-08-07T17:57:28.1821872Z inflating: build/bin/c10_ConstexprCrc_test 2024-08-07T17:57:28.1882142Z inflating: build/bin/c10_DeadlockDetection_test 2024-08-07T17:57:28.1944714Z inflating: build/bin/c10_Half_test 2024-08-07T17:57:28.2013515Z inflating: build/bin/c10_LeftRight_test 2024-08-07T17:57:28.2080291Z inflating: build/bin/c10_Metaprogramming_test 2024-08-07T17:57:28.2141932Z inflating: build/bin/c10_Synchronized_test 2024-08-07T17:57:28.2209997Z inflating: build/bin/c10_ThreadLocal_test 2024-08-07T17:57:28.2272936Z inflating: build/bin/c10_TypeIndex_test 2024-08-07T17:57:28.2335840Z inflating: build/bin/c10_TypeList_test 2024-08-07T17:57:28.2395543Z inflating: build/bin/c10_TypeTraits_test 2024-08-07T17:57:28.2458684Z inflating: build/bin/c10_accumulate_test 2024-08-07T17:57:28.2527055Z inflating: build/bin/c10_bfloat16_test 2024-08-07T17:57:28.2588624Z 
inflating: build/bin/c10_bit_cast_test 2024-08-07T17:57:28.2657678Z inflating: build/bin/c10_complex_math_test 2024-08-07T17:57:28.2726025Z inflating: build/bin/c10_complex_test 2024-08-07T17:57:28.2790206Z inflating: build/bin/c10_exception_test 2024-08-07T17:57:28.2852398Z inflating: build/bin/c10_flags_test 2024-08-07T17:57:28.2914759Z inflating: build/bin/c10_irange_test 2024-08-07T17:57:28.2975528Z inflating: build/bin/c10_generic_math_test 2024-08-07T17:57:28.3175210Z inflating: build/bin/c10_intrusive_ptr_test 2024-08-07T17:57:28.3240735Z inflating: build/bin/c10_lazy_test 2024-08-07T17:57:28.3310710Z inflating: build/bin/c10_logging_test 2024-08-07T17:57:28.3401996Z inflating: build/bin/c10_optional_test 2024-08-07T17:57:28.3467617Z inflating: build/bin/c10_registry_test 2024-08-07T17:57:28.3544108Z inflating: build/bin/c10_ordered_preserving_dict_test 2024-08-07T17:57:28.3727147Z inflating: build/bin/c10_small_vector_test 2024-08-07T17:57:28.3790066Z inflating: build/bin/c10_ssize_test 2024-08-07T17:57:28.3853669Z inflating: build/bin/c10_string_util_test 2024-08-07T17:57:28.3915656Z inflating: build/bin/c10_tempfile_test 2024-08-07T17:57:28.3987210Z inflating: build/bin/c10_string_view_test 2024-08-07T17:57:28.4056063Z inflating: build/bin/c10_typeid_test 2024-08-07T17:57:28.4115531Z inflating: build/bin/c10_intrusive_ptr_benchmark 2024-08-07T17:57:28.4659258Z inflating: build/bin/protoc-3.13.0.0 2024-08-07T17:57:28.5203505Z inflating: build/bin/protoc 2024-08-07T17:57:28.5267603Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_1_var_test 2024-08-07T17:57:28.5333339Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_catches_stream 2024-08-07T17:57:28.5397719Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_catches_thread_and_block_and_device 2024-08-07T17:57:28.5461168Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_from_2_processes 2024-08-07T17:57:28.5526129Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_blocks_and_threads 2024-08-07T17:57:28.5590328Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_multiple_blocks 2024-08-07T17:57:28.5654800Z inflating: build/bin/c10_cuda_CUDAAssertionsTest_multiple_writes_from_same_block 2024-08-07T17:57:28.5715365Z inflating: build/bin/c10_cuda_CUDATest 2024-08-07T17:57:28.6114282Z inflating: build/bin/vec_test_all_types_DEFAULT 2024-08-07T17:57:28.6529634Z inflating: build/bin/vec_test_all_types_AVX512 2024-08-07T17:57:28.6964334Z inflating: build/bin/vec_test_all_types_AVX2 2024-08-07T17:57:28.7029741Z inflating: build/bin/HashStoreTest 2024-08-07T17:57:28.7094482Z inflating: build/bin/FileStoreTest 2024-08-07T17:57:28.7160090Z inflating: build/bin/BackoffTest 2024-08-07T17:57:28.7229247Z inflating: build/bin/TCPStoreTest 2024-08-07T17:57:28.7245840Z inflating: build/bin/ProcessGroupMPITest 2024-08-07T17:57:28.7251316Z inflating: build/bin/torch_shm_manager 2024-08-07T17:57:28.7318173Z inflating: build/bin/test_edge_op_registration 2024-08-07T17:57:28.7321821Z inflating: build/bin/example_allreduce 2024-08-07T17:57:28.7385252Z inflating: build/bin/Dimname_test 2024-08-07T17:57:28.7475663Z inflating: build/bin/Dict_test 2024-08-07T17:57:28.7554608Z inflating: build/bin/MaybeOwned_test 2024-08-07T17:57:28.7624863Z inflating: build/bin/NamedTensor_test 2024-08-07T17:57:28.7697506Z inflating: build/bin/apply_utils_test 2024-08-07T17:57:28.7769229Z inflating: build/bin/atest 2024-08-07T17:57:28.7846856Z inflating: build/bin/basic 2024-08-07T17:57:28.7914523Z inflating: build/bin/broadcast_test 
2024-08-07T17:57:28.7976456Z inflating: build/bin/cpu_allocator_test 2024-08-07T17:57:28.8042730Z inflating: build/bin/cpu_profiling_allocator_test 2024-08-07T17:57:28.8114064Z inflating: build/bin/cpu_generator_test 2024-08-07T17:57:28.8175168Z inflating: build/bin/dispatch_key_set_test 2024-08-07T17:57:28.8289142Z inflating: build/bin/cpu_rng_test 2024-08-07T17:57:28.8351321Z inflating: build/bin/dlconvertor_test 2024-08-07T17:57:28.8423085Z inflating: build/bin/extension_backend_test 2024-08-07T17:57:28.8490335Z inflating: build/bin/half_test 2024-08-07T17:57:28.8607967Z inflating: build/bin/ivalue_test 2024-08-07T17:57:28.8668492Z inflating: build/bin/lazy_tensor_test 2024-08-07T17:57:28.8735793Z inflating: build/bin/math_kernel_test 2024-08-07T17:57:28.8802049Z inflating: build/bin/memory_format_test 2024-08-07T17:57:28.8866798Z inflating: build/bin/memory_overlapping_test 2024-08-07T17:57:28.8932521Z inflating: build/bin/mobile_memory_cleanup 2024-08-07T17:57:28.8994174Z inflating: build/bin/operator_name_test 2024-08-07T17:57:28.9063394Z inflating: build/bin/native_test 2024-08-07T17:57:28.9127092Z inflating: build/bin/packedtensoraccessor_test 2024-08-07T17:57:28.9189899Z inflating: build/bin/operators_test 2024-08-07T17:57:28.9272322Z inflating: build/bin/pow_test 2024-08-07T17:57:28.9342679Z inflating: build/bin/quantized_test 2024-08-07T17:57:28.9404460Z inflating: build/bin/reduce_ops_test 2024-08-07T17:57:28.9466767Z inflating: build/bin/reportMemoryUsage_test 2024-08-07T17:57:28.9536334Z inflating: build/bin/scalar_tensor_test 2024-08-07T17:57:28.9600105Z inflating: build/bin/StorageUtils_test 2024-08-07T17:57:28.9671394Z inflating: build/bin/scalar_test 2024-08-07T17:57:28.9736493Z inflating: build/bin/stride_properties_test 2024-08-07T17:57:28.9832794Z inflating: build/bin/tensor_iterator_test 2024-08-07T17:57:28.9899552Z inflating: build/bin/test_parallel 2024-08-07T17:57:28.9967472Z inflating: build/bin/type_ptr_test 2024-08-07T17:57:28.9971364Z inflating: build/bin/thread_init_test 2024-08-07T17:57:29.0045025Z inflating: build/bin/type_test 2024-08-07T17:57:29.0109305Z inflating: build/bin/undefined_tensor_test 2024-08-07T17:57:29.0111342Z inflating: build/bin/verify_api_visibility 2024-08-07T17:57:29.0196547Z inflating: build/bin/legacy_vmap_test 2024-08-07T17:57:29.0259699Z inflating: build/bin/weakref_test 2024-08-07T17:57:29.0323936Z inflating: build/bin/wrapdim_test 2024-08-07T17:57:29.0387359Z inflating: build/bin/xla_tensor_test 2024-08-07T17:57:29.0518002Z inflating: build/bin/List_test 2024-08-07T17:57:29.0591187Z inflating: build/bin/IListRef_test 2024-08-07T17:57:29.0740064Z inflating: build/bin/kernel_function_legacy_test 2024-08-07T17:57:29.0857916Z inflating: build/bin/kernel_function_test 2024-08-07T17:57:29.0939370Z inflating: build/bin/KernelFunction_test 2024-08-07T17:57:29.1095222Z inflating: build/bin/kernel_lambda_legacy_test 2024-08-07T17:57:29.1170510Z inflating: build/bin/kernel_stackbased_test 2024-08-07T17:57:29.1297139Z inflating: build/bin/kernel_lambda_test 2024-08-07T17:57:29.1359450Z inflating: build/bin/CppSignature_test 2024-08-07T17:57:29.1477029Z inflating: build/bin/make_boxed_from_unboxed_functor_test 2024-08-07T17:57:29.1545783Z inflating: build/bin/backend_fallback_test 2024-08-07T17:57:29.1606048Z inflating: build/bin/op_allowlist_test 2024-08-07T17:57:29.1683372Z inflating: build/bin/inline_container_test 2024-08-07T17:57:29.2048612Z inflating: build/bin/op_registration_test 2024-08-07T17:57:29.2113400Z inflating: 
build/bin/cuda_apply_test 2024-08-07T17:57:29.2176769Z inflating: build/bin/cuda_allocator_test 2024-08-07T17:57:29.2249428Z inflating: build/bin/cuda_atomic_ops_test 2024-08-07T17:57:29.2316425Z inflating: build/bin/cuda_caching_host_allocator_test 2024-08-07T17:57:29.2402484Z inflating: build/bin/cuda_complex_math_test 2024-08-07T17:57:29.2474229Z inflating: build/bin/cuda_complex_test 2024-08-07T17:57:29.2535641Z inflating: build/bin/cuda_device_test 2024-08-07T17:57:29.2605978Z inflating: build/bin/cuda_cub_test 2024-08-07T17:57:29.2667906Z inflating: build/bin/cuda_dlconvertor_test 2024-08-07T17:57:29.2731091Z inflating: build/bin/cuda_integer_divider_test 2024-08-07T17:57:29.2811564Z inflating: build/bin/cuda_distributions_test 2024-08-07T17:57:29.2880825Z inflating: build/bin/cuda_generator_test 2024-08-07T17:57:29.2941731Z inflating: build/bin/cuda_half_test 2024-08-07T17:57:29.3002345Z inflating: build/bin/cuda_optional_test 2024-08-07T17:57:29.3067209Z inflating: build/bin/cuda_reportMemoryUsage_test 2024-08-07T17:57:29.3128828Z inflating: build/bin/cuda_allocatorTraceTracker_test 2024-08-07T17:57:29.3203284Z inflating: build/bin/cuda_stream_test 2024-08-07T17:57:29.3266046Z inflating: build/bin/cuda_packedtensoraccessor_test 2024-08-07T17:57:29.3326949Z inflating: build/bin/cuda_cudnn_test 2024-08-07T17:57:29.3390507Z inflating: build/bin/cuda_vectorized_test 2024-08-07T17:57:29.3408570Z inflating: build/bin/tutorial_tensorexpr 2024-08-07T17:57:29.3488521Z inflating: build/bin/ProcessGroupGlooTest 2024-08-07T17:57:29.3559388Z inflating: build/bin/ProcessGroupGlooAsyncTest 2024-08-07T17:57:29.3637371Z inflating: build/bin/ProcessGroupNCCLTest 2024-08-07T17:57:29.3713317Z inflating: build/bin/ProcessGroupNCCLErrorsTest 2024-08-07T17:57:29.3780262Z inflating: build/bin/test_dist_autograd 2024-08-07T17:57:29.3864741Z inflating: build/bin/test_cpp_rpc 2024-08-07T17:57:29.3867956Z inflating: build/bin/parallel_benchmark 2024-08-07T17:57:29.3950598Z inflating: build/bin/test_mobile_nnc 2024-08-07T17:57:29.3961756Z inflating: build/bin/aot_model_compiler_test 2024-08-07T17:57:29.4380774Z inflating: build/bin/test_lazy 2024-08-07T17:57:29.5385683Z inflating: build/bin/test_tensorexpr 2024-08-07T17:57:29.6820036Z inflating: build/bin/test_api 2024-08-07T17:57:29.7525852Z inflating: build/bin/test_jit 2024-08-07T17:57:29.7526304Z creating: .additional_ci_files/ 2024-08-07T17:57:29.7595509Z inflating: .additional_ci_files/test-times.json 2024-08-07T17:57:29.7875392Z inflating: .additional_ci_files/test-class-times.json 2024-08-07T17:57:29.7922861Z ##[group]Run rm artifacts.zip 2024-08-07T17:57:29.7923359Z rm artifacts.zip 2024-08-07T17:57:29.7930451Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:57:29.7930960Z env: 2024-08-07T17:57:29.7931254Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:29.7931675Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:29.7932136Z ##[endgroup] 2024-08-07T17:57:29.8693140Z ##[group]Run df -H 2024-08-07T17:57:29.8693493Z df -H 2024-08-07T17:57:29.8700951Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:57:29.8701437Z env: 2024-08-07T17:57:29.8701704Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:29.8702137Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:29.8702604Z ##[endgroup] 2024-08-07T17:57:29.8752248Z Filesystem Size Used Avail Use% Mounted on 2024-08-07T17:57:29.8752777Z devtmpfs 4.2M 0 4.2M 0% /dev 2024-08-07T17:57:29.8753217Z tmpfs 65G 0 65G 0% /dev/shm 
2024-08-07T17:57:29.8753628Z tmpfs 26G 562k 26G 1% /run 2024-08-07T17:57:29.8755057Z /dev/xvda1 161G 38G 124G 24% / 2024-08-07T17:57:29.8755889Z tmpfs 65G 8.2k 65G 1% /tmp 2024-08-07T17:57:29.8756345Z /dev/xvda128 11M 1.4M 9.2M 13% /boot/efi 2024-08-07T17:57:29.8756797Z tmpfs 13G 0 13G 0% /run/user/0 2024-08-07T17:57:29.8797260Z Prepare all required actions 2024-08-07T17:57:29.8797796Z Getting action download info 2024-08-07T17:57:29.9991067Z ##[group]Run ./.github/actions/download-td-artifacts 2024-08-07T17:57:29.9991579Z with: 2024-08-07T17:57:29.9991852Z env: 2024-08-07T17:57:29.9992139Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:29.9992550Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:29.9993011Z ##[endgroup] 2024-08-07T17:57:30.0039580Z ##[group]Run seemethere/download-artifact-s3@v4 2024-08-07T17:57:30.0040046Z with: 2024-08-07T17:57:30.0040337Z name: td_results 2024-08-07T17:57:30.0040656Z s3-bucket: gha-artifacts 2024-08-07T17:57:30.0041036Z region: us-east-1 2024-08-07T17:57:30.0041364Z env: 2024-08-07T17:57:30.0041640Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:30.0042352Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:30.0042845Z ##[endgroup] 2024-08-07T17:57:30.6378285Z (node:89477) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023. 2024-08-07T17:57:30.6378910Z 2024-08-07T17:57:30.6379180Z Please migrate your code to use AWS SDK for JavaScript (v3). 2024-08-07T17:57:30.6379834Z For more information, check the migration guide at https://a.co/7PzMCcy 2024-08-07T17:57:30.6380532Z (Use `node --trace-warnings ...` to show where the warning was created) 2024-08-07T17:57:30.7193209Z Found 1 objects with prefix pytorch/pytorch/10288745067/td_results/ 2024-08-07T17:57:30.7194006Z Starting download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/td_results.json 2024-08-07T17:57:30.7837723Z Finished download (1/1): /home/ec2-user/actions-runner/_work/pytorch/pytorch/td_results.json 2024-08-07T17:57:30.7849451Z Artifact download has finished successfully 2024-08-07T17:57:30.8071596Z ##[group]Run mkdir -p .additional_ci_files 2024-08-07T17:57:30.8072147Z mkdir -p .additional_ci_files 2024-08-07T17:57:30.8072714Z mv td_results.json .additional_ci_files/td_results.json 2024-08-07T17:57:30.8080166Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:57:30.8080653Z env: 2024-08-07T17:57:30.8080926Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:30.8081359Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:30.8081820Z ##[endgroup] 2024-08-07T17:57:30.8184990Z ##[group]Run .github/scripts/parse_ref.py 2024-08-07T17:57:30.8185519Z .github/scripts/parse_ref.py 2024-08-07T17:57:30.8192236Z shell: /usr/bin/bash -e {0} 2024-08-07T17:57:30.8192585Z env: 2024-08-07T17:57:30.8192915Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:30.8193349Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:30.8193795Z ##[endgroup] 2024-08-07T17:57:30.8555111Z Prepare all required actions 2024-08-07T17:57:30.8607495Z ##[group]Run ./.github/actions/get-workflow-job-id 2024-08-07T17:57:30.8607980Z with: 2024-08-07T17:57:30.8608482Z github-token: *** 2024-08-07T17:57:30.8608803Z env: 2024-08-07T17:57:30.8609092Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:30.8609511Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:30.8609986Z ##[endgroup] 2024-08-07T17:57:30.8638205Z ##[group]Run set -eux 2024-08-07T17:57:30.8638583Z set -eux 
2024-08-07T17:57:30.8639157Z python3 .github/scripts/get_workflow_job_id.py "${GITHUB_RUN_ID}" "${RUNNER_NAME}" 2024-08-07T17:57:30.8646637Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:57:30.8647146Z env: 2024-08-07T17:57:30.8647422Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:30.8647889Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:30.8648582Z GITHUB_TOKEN: *** 2024-08-07T17:57:30.8648903Z ##[endgroup] 2024-08-07T17:57:30.8677655Z + python3 .github/scripts/get_workflow_job_id.py 10288745067 i-07832b6703dca2070 2024-08-07T17:57:34.0719967Z setting job-id=28476182521 2024-08-07T17:57:34.0720732Z setting job-name=linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, amz2023.linux.4xlarge.nvidia.gpu) 2024-08-07T17:57:34.0946815Z Prepare all required actions 2024-08-07T17:57:34.0947390Z Getting action download info 2024-08-07T17:57:34.2037178Z ##[group]Run ./.github/actions/filter-test-configs 2024-08-07T17:57:34.2037650Z with: 2024-08-07T17:57:34.2038160Z github-token: *** 2024-08-07T17:57:34.2040238Z test-matrix: {"include": [{"config": "default", "shard": 1, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 2, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 3, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 4, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 5, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}]} 2024-08-07T17:57:34.2042960Z job-name: linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, amz2023.linux.4xlarge.nvidia.gpu) 2024-08-07T17:57:34.2043682Z env: 2024-08-07T17:57:34.2044046Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:34.2044526Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:34.2045022Z ##[endgroup] 2024-08-07T17:57:34.2100728Z ##[group]Run nick-fields/retry@3e91a01664abd3c5cd539100d10d33b9c5b68482 2024-08-07T17:57:34.2101244Z with: 2024-08-07T17:57:34.2101531Z shell: bash 2024-08-07T17:57:34.2101837Z timeout_minutes: 10 2024-08-07T17:57:34.2102152Z max_attempts: 5 2024-08-07T17:57:34.2102469Z retry_wait_seconds: 30 2024-08-07T17:57:34.2103429Z command: set -eux # PyYAML 6.0 doesn't work with MacOS x86 anymore # This must run on Python-3.7 (AmazonLinux2) so can't use request=3.32.2 python3 -m pip install requests==2.27.1 pyyaml==6.0.1 2024-08-07T17:57:34.2104420Z polling_interval_seconds: 1 2024-08-07T17:57:34.2104793Z warning_on_retry: true 2024-08-07T17:57:34.2105161Z continue_on_error: false 2024-08-07T17:57:34.2105485Z env: 2024-08-07T17:57:34.2105771Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:34.2106211Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:34.2107077Z GITHUB_TOKEN: *** 2024-08-07T17:57:34.2107393Z ##[endgroup] 2024-08-07T17:57:34.3054640Z + python3 -m pip install requests==2.27.1 pyyaml==6.0.1 2024-08-07T17:57:34.6463661Z Defaulting to user installation because normal site-packages is not writeable 2024-08-07T17:57:34.6680496Z Requirement already satisfied: requests==2.27.1 in /home/ec2-user/.local/lib/python3.9/site-packages (2.27.1) 2024-08-07T17:57:34.6686174Z Requirement already satisfied: pyyaml==6.0.1 in /home/ec2-user/.local/lib/python3.9/site-packages (6.0.1) 2024-08-07T17:57:34.6851053Z Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/.local/lib/python3.9/site-packages (from requests==2.27.1) (2024.7.4) 
2024-08-07T17:57:34.6866422Z Requirement already satisfied: charset-normalizer~=2.0.0 in /home/ec2-user/.local/lib/python3.9/site-packages (from requests==2.27.1) (2.0.12) 2024-08-07T17:57:34.6872494Z Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (1.25.10) 2024-08-07T17:57:34.6887916Z Requirement already satisfied: idna<4,>=2.5 in /usr/lib/python3.9/site-packages (from requests==2.27.1) (2.10) 2024-08-07T17:57:35.2841101Z Command completed after 1 attempt(s). 2024-08-07T17:57:35.2903766Z ##[group]Run set -x 2024-08-07T17:57:35.2904141Z set -x 2024-08-07T17:57:35.2904449Z  2024-08-07T17:57:35.2904981Z # Use relative path here as this could be checked out anywhere, not necessarily 2024-08-07T17:57:35.2905643Z # in runner workspace 2024-08-07T17:57:35.2906166Z python3 "${GITHUB_ACTION_PATH}/../../scripts/parse_ref.py" 2024-08-07T17:57:35.2914487Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:57:35.2914981Z env: 2024-08-07T17:57:35.2915259Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:35.2915729Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:35.2916190Z ##[endgroup] 2024-08-07T17:57:35.2946609Z + python3 /home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/filter-test-configs/../../scripts/parse_ref.py 2024-08-07T17:57:35.3280248Z ##[group]Run echo "Workflow: ${GITHUB_WORKFLOW}" 2024-08-07T17:57:35.3280796Z echo "Workflow: ${GITHUB_WORKFLOW}" 2024-08-07T17:57:35.3281263Z echo "Job name: ${JOB_NAME}" 2024-08-07T17:57:35.3281654Z  2024-08-07T17:57:35.3282178Z # Use relative path here as this could be checked out anywhere, not necessarily 2024-08-07T17:57:35.3282836Z # in runner workspace 2024-08-07T17:57:35.3283405Z python3 "${GITHUB_ACTION_PATH}/../../scripts/filter_test_configs.py" \ 2024-08-07T17:57:35.3284048Z  --workflow "${GITHUB_WORKFLOW}" \ 2024-08-07T17:57:35.3284514Z  --job-name "${JOB_NAME}" \ 2024-08-07T17:57:35.3287366Z  --test-matrix "{"include": [{"config": "default", "shard": 1, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 2, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 3, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 4, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 5, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}]}" \ 2024-08-07T17:57:35.3289659Z  --selected-test-configs "" \ 2024-08-07T17:57:35.3290112Z  --pr-number "${PR_NUMBER}" \ 2024-08-07T17:57:35.3290520Z  --tag "${TAG}" \ 2024-08-07T17:57:35.3290884Z  --event-name "${EVENT_NAME}" \ 2024-08-07T17:57:35.3291301Z  --schedule "${SCHEDULE}" \ 2024-08-07T17:57:35.3291707Z  --branch "${HEAD_BRANCH}" 2024-08-07T17:57:35.3299072Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:57:35.3299572Z env: 2024-08-07T17:57:35.3299859Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:35.3300275Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:35.3300968Z GITHUB_TOKEN: *** 2024-08-07T17:57:35.3301571Z JOB_NAME: linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, amz2023.linux.4xlarge.nvidia.gpu) 2024-08-07T17:57:35.3302278Z PR_NUMBER: 131248 2024-08-07T17:57:35.3302581Z TAG: 2024-08-07T17:57:35.3302879Z EVENT_NAME: pull_request 2024-08-07T17:57:35.3303244Z SCHEDULE: 2024-08-07T17:57:35.3303531Z HEAD_BRANCH: 2024-08-07T17:57:35.3303850Z ##[endgroup] 
2024-08-07T17:57:35.3332482Z Workflow: pull 2024-08-07T17:57:35.3333087Z Job name: linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, amz2023.linux.4xlarge.nvidia.gpu) 2024-08-07T17:57:35.6401209Z INFO:root:Found no test-config label on the PR, so all test configs are included 2024-08-07T17:57:35.8252512Z ##[group]Run echo "Filtered matrix:" 2024-08-07T17:57:35.8252991Z echo "Filtered matrix:" 2024-08-07T17:57:35.8255094Z echo "{"include": [{"config": "default", "shard": 1, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 2, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 3, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 4, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}, {"config": "default", "shard": 5, "num_shards": 5, "runner": "amz2023.linux.4xlarge.nvidia.gpu"}]}" 2024-08-07T17:57:35.8257183Z  2024-08-07T17:57:35.8257465Z echo 2024-08-07T17:57:35.8257828Z echo "Is the current job unstable? False" 2024-08-07T17:57:35.8258257Z  2024-08-07T17:57:35.8258537Z echo 2024-08-07T17:57:35.8258881Z echo "Is keep-going label set? False" 2024-08-07T17:57:35.8259292Z  2024-08-07T17:57:35.8259569Z echo 2024-08-07T17:57:35.8259900Z echo "Reenabled issues? " 2024-08-07T17:57:35.8267418Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:57:35.8267898Z env: 2024-08-07T17:57:35.8268404Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:35.8268853Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:35.8269317Z ##[endgroup] 2024-08-07T17:57:35.8298981Z Filtered matrix: 2024-08-07T17:57:35.8301015Z {include: [{config: default, shard: 1, num_shards: 5, runner: amz2023.linux.4xlarge.nvidia.gpu}, {config: default, shard: 2, num_shards: 5, runner: amz2023.linux.4xlarge.nvidia.gpu}, {config: default, shard: 3, num_shards: 5, runner: amz2023.linux.4xlarge.nvidia.gpu}, {config: default, shard: 4, num_shards: 5, runner: amz2023.linux.4xlarge.nvidia.gpu}, {config: default, shard: 5, num_shards: 5, runner: amz2023.linux.4xlarge.nvidia.gpu}]} 2024-08-07T17:57:35.8302946Z 2024-08-07T17:57:35.8303124Z Is the current job unstable? False 2024-08-07T17:57:35.8303388Z 2024-08-07T17:57:35.8303810Z Is keep-going label set? False 2024-08-07T17:57:35.8304075Z 2024-08-07T17:57:35.8304213Z Reenabled issues?
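[Editor's note] The filter step above reports "Found no test-config label on the PR, so all test configs are included" and echoes the matrix back unchanged. As a reading aid, here is a minimal Python sketch of that kind of label-based matrix filtering; the helper name and signature are hypothetical, not the actual .github/scripts/filter_test_configs.py:

```python
import json

def filter_test_matrix(matrix_json: str, pr_labels: list[str]) -> dict:
    """Keep matrix entries whose config is named by a test-config/<name> label."""
    matrix = json.loads(matrix_json)
    wanted = {label.removeprefix("test-config/")
              for label in pr_labels if label.startswith("test-config/")}
    if not wanted:
        # No test-config labels on the PR: include all test configs,
        # matching the behavior shown in the log above.
        return matrix
    return {"include": [entry for entry in matrix["include"]
                        if entry["config"] in wanted]}

full = '{"include": [{"config": "default", "shard": 3, "num_shards": 5}]}'
assert filter_test_matrix(full, []) == json.loads(full)  # full matrix passes through
```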
2024-08-07T17:57:35.8365159Z ##[group]Run echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2024-08-07T17:57:35.8365875Z echo "timeout=$((JOB_TIMEOUT-30))" >> "${GITHUB_OUTPUT}" 2024-08-07T17:57:35.8372703Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T17:57:35.8373221Z env: 2024-08-07T17:57:35.8373503Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:35.8373964Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:35.8374456Z JOB_TIMEOUT: 360 2024-08-07T17:57:35.8374760Z ##[endgroup] 2024-08-07T17:57:35.8464638Z ##[group]Run set -x 2024-08-07T17:57:35.8465116Z set -x 2024-08-07T17:57:35.8465423Z  2024-08-07T17:57:35.8465774Z if [[ $TEST_CONFIG == 'multigpu' ]]; then 2024-08-07T17:57:35.8466316Z  TEST_COMMAND=.ci/pytorch/multigpu-test.sh 2024-08-07T17:57:35.8466853Z elif [[ $BUILD_ENVIRONMENT == *onnx* ]]; then 2024-08-07T17:57:35.8467378Z  TEST_COMMAND=.ci/onnx/test.sh 2024-08-07T17:57:35.8467807Z else 2024-08-07T17:57:35.8468154Z  TEST_COMMAND=.ci/pytorch/test.sh 2024-08-07T17:57:35.8468593Z fi 2024-08-07T17:57:35.8468896Z  2024-08-07T17:57:35.8469359Z # detached container should get cleaned up by teardown_ec2_linux 2024-08-07T17:57:35.8470095Z # TODO: Stop building test binaries as part of the build phase 2024-08-07T17:57:35.8470747Z # Used for GPU_FLAG since that doesn't play nice 2024-08-07T17:57:35.8471335Z # shellcheck disable=SC2086,SC2090 2024-08-07T17:57:35.8471803Z container_name=$(docker run \ 2024-08-07T17:57:35.8472248Z  ${GPU_FLAG:-} \ 2024-08-07T17:57:35.8472644Z  -e BUILD_ENVIRONMENT \ 2024-08-07T17:57:35.8473052Z  -e PR_NUMBER \ 2024-08-07T17:57:35.8473449Z  -e GITHUB_ACTIONS \ 2024-08-07T17:57:35.8473867Z  -e GITHUB_REPOSITORY \ 2024-08-07T17:57:35.8474288Z  -e GITHUB_WORKFLOW \ 2024-08-07T17:57:35.8474697Z  -e GITHUB_JOB \ 2024-08-07T17:57:35.8475085Z  -e GITHUB_RUN_ID \ 2024-08-07T17:57:35.8475471Z  -e GITHUB_RUN_NUMBER \ 2024-08-07T17:57:35.8475902Z  -e GITHUB_RUN_ATTEMPT \ 2024-08-07T17:57:35.8476325Z  -e JOB_ID \ 2024-08-07T17:57:35.8476671Z  -e JOB_NAME \ 2024-08-07T17:57:35.8477048Z  -e BASE_SHA \ 2024-08-07T17:57:35.8477413Z  -e BRANCH \ 2024-08-07T17:57:35.8477757Z  -e SHA1 \ 2024-08-07T17:57:35.8478136Z  -e AWS_DEFAULT_REGION \ 2024-08-07T17:57:35.8478566Z  -e IN_WHEEL_TEST \ 2024-08-07T17:57:35.8478949Z  -e SHARD_NUMBER \ 2024-08-07T17:57:35.8479346Z  -e TEST_CONFIG \ 2024-08-07T17:57:35.8479743Z  -e NUM_TEST_SHARDS \ 2024-08-07T17:57:35.8480144Z  -e REENABLED_ISSUES \ 2024-08-07T17:57:35.8480582Z  -e CONTINUE_THROUGH_ERROR \ 2024-08-07T17:57:35.8481039Z  -e VERBOSE_TEST_LOGS \ 2024-08-07T17:57:35.8481453Z  -e TEST_SHOWLOCALS \ 2024-08-07T17:57:35.8481865Z  -e NO_TEST_TIMEOUT \ 2024-08-07T17:57:35.8482261Z  -e NO_TD \ 2024-08-07T17:57:35.8482649Z  -e TD_DISTRIBUTED \ 2024-08-07T17:57:35.8483060Z  -e PR_LABELS \ 2024-08-07T17:57:35.8483480Z  -e MAX_JOBS="$(nproc --ignore=2)" \ 2024-08-07T17:57:35.8483927Z  -e SCCACHE_BUCKET \ 2024-08-07T17:57:35.8484348Z  -e SCCACHE_S3_KEY_PREFIX \ 2024-08-07T17:57:35.8484763Z  -e XLA_CUDA \ 2024-08-07T17:57:35.8485164Z  -e XLA_CLANG_CACHE_S3_BUCKET_NAME \ 2024-08-07T17:57:35.8485677Z  -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK \ 2024-08-07T17:57:35.8486192Z  -e PYTORCH_TEST_RERUN_DISABLED_TESTS \ 2024-08-07T17:57:35.8486687Z  -e SKIP_SCCACHE_INITIALIZATION=1 \ 2024-08-07T17:57:35.8487167Z  -e HUGGING_FACE_HUB_TOKEN \ 2024-08-07T17:57:35.8487817Z  -e DASHBOARD_TAG \ 2024-08-07T17:57:35.8488272Z  --env-file="/tmp/github_env_${GITHUB_RUN_ID}" \ 2024-08-07T17:57:35.8488813Z  --security-opt 
seccomp=unconfined \ 2024-08-07T17:57:35.8489289Z  --cap-add=SYS_PTRACE \ 2024-08-07T17:57:35.8489685Z  --ipc=host \ 2024-08-07T17:57:35.8490073Z  --shm-size="${SHM_SIZE}" \ 2024-08-07T17:57:35.8490495Z  --tty \ 2024-08-07T17:57:35.8490816Z  --detach \ 2024-08-07T17:57:35.8491198Z  --name="${container_name}" \ 2024-08-07T17:57:35.8491780Z  --user jenkins \ 2024-08-07T17:57:35.8492285Z  -v "${GITHUB_WORKSPACE}:/var/lib/jenkins/workspace" \ 2024-08-07T17:57:35.8492840Z  -w /var/lib/jenkins/workspace \ 2024-08-07T17:57:35.8493293Z  "${DOCKER_IMAGE}" 2024-08-07T17:57:35.8493641Z ) 2024-08-07T17:57:35.8494047Z # Propagate download.pytorch.org IP to container 2024-08-07T17:57:35.8494951Z grep download.pytorch.org /etc/hosts | docker exec -i "${container_name}" sudo bash -c "/bin/cat >> /etc/hosts" 2024-08-07T17:57:35.8496451Z echo "DOCKER_CONTAINER_ID=${container_name}" >> "${GITHUB_ENV}" 2024-08-07T17:57:35.8497275Z docker exec -t "${container_name}" sh -c "pip install $(echo dist/*.whl)[opt-einsum] && ${TEST_COMMAND}" 2024-08-07T17:57:35.8503718Z shell: /usr/bin/bash -e {0} 2024-08-07T17:57:35.8504069Z env: 2024-08-07T17:57:35.8504354Z GIT_DEFAULT_BRANCH: main 2024-08-07T17:57:35.8504782Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:57:35.8505325Z BUILD_ENVIRONMENT: linux-focal-cuda12.1-py3.10-gcc9 2024-08-07T17:57:35.8505761Z PR_NUMBER: 131248 2024-08-07T17:57:35.8506079Z GITHUB_REPOSITORY: pytorch/pytorch 2024-08-07T17:57:35.8506454Z GITHUB_WORKFLOW: pull 2024-08-07T17:57:35.8506765Z GITHUB_JOB: test 2024-08-07T17:57:35.8507080Z GITHUB_RUN_ID: 10288745067 2024-08-07T17:57:35.8507441Z GITHUB_RUN_NUMBER: 234358 2024-08-07T17:57:35.8507794Z GITHUB_RUN_ATTEMPT: 1 2024-08-07T17:57:35.8508124Z JOB_ID: 28476182521 2024-08-07T17:57:35.8508710Z JOB_NAME: linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, amz2023.linux.4xlarge.nvidia.gpu) 2024-08-07T17:57:35.8509417Z BRANCH: pull/131248 2024-08-07T17:57:35.8509818Z SHA1: 016588f53c6904b840aa56aa86f95460b4d9c996 2024-08-07T17:57:35.8510310Z BASE_SHA: 6ce09a9bb33e4011761558032e2165ad7b49fb68 2024-08-07T17:57:35.8510783Z TEST_CONFIG: default 2024-08-07T17:57:35.8511139Z SHARD_NUMBER: 3 2024-08-07T17:57:35.8511451Z NUM_TEST_SHARDS: 5 2024-08-07T17:57:35.8511802Z REENABLED_ISSUES: 2024-08-07T17:57:35.8512168Z CONTINUE_THROUGH_ERROR: False 2024-08-07T17:57:35.8512551Z VERBOSE_TEST_LOGS: False 2024-08-07T17:57:35.8512928Z TEST_SHOWLOCALS: False 2024-08-07T17:57:35.8513298Z NO_TEST_TIMEOUT: False 2024-08-07T17:57:35.8513638Z NO_TD: False 2024-08-07T17:57:35.8513958Z TD_DISTRIBUTED: False 2024-08-07T17:57:35.8514403Z SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2 2024-08-07T17:57:35.8514883Z SCCACHE_S3_KEY_PREFIX: pull 2024-08-07T17:57:35.8515264Z SHM_SIZE: 2g 2024-08-07T17:57:35.8516205Z DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:57:35.8517212Z XLA_CUDA: 2024-08-07T17:57:35.8517703Z XLA_CLANG_CACHE_S3_BUCKET_NAME: ossci-compiler-clang-cache-circleci-xla 2024-08-07T17:57:35.8518324Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK: 0 2024-08-07T17:57:35.8518758Z PYTORCH_TEST_RERUN_DISABLED_TESTS: 0 2024-08-07T17:57:35.8519194Z DASHBOARD_TAG: 2024-08-07T17:57:35.8519536Z HUGGING_FACE_HUB_TOKEN: 2024-08-07T17:57:35.8519887Z ##[endgroup] 2024-08-07T17:57:35.8547121Z + [[ default == \m\u\l\t\i\g\p\u ]] 2024-08-07T17:57:35.8547603Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *onnx* ]] 2024-08-07T17:57:35.8548071Z + 
TEST_COMMAND=.ci/pytorch/test.sh 2024-08-07T17:57:35.8557409Z +++ nproc --ignore=2 2024-08-07T17:57:35.8573807Z ++ docker run --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all -e BUILD_ENVIRONMENT -e PR_NUMBER -e GITHUB_ACTIONS -e GITHUB_REPOSITORY -e GITHUB_WORKFLOW -e GITHUB_JOB -e GITHUB_RUN_ID -e GITHUB_RUN_NUMBER -e GITHUB_RUN_ATTEMPT -e JOB_ID -e JOB_NAME -e BASE_SHA -e BRANCH -e SHA1 -e AWS_DEFAULT_REGION -e IN_WHEEL_TEST -e SHARD_NUMBER -e TEST_CONFIG -e NUM_TEST_SHARDS -e REENABLED_ISSUES -e CONTINUE_THROUGH_ERROR -e VERBOSE_TEST_LOGS -e TEST_SHOWLOCALS -e NO_TEST_TIMEOUT -e NO_TD -e TD_DISTRIBUTED -e PR_LABELS -e MAX_JOBS=14 -e SCCACHE_BUCKET -e SCCACHE_S3_KEY_PREFIX -e XLA_CUDA -e XLA_CLANG_CACHE_S3_BUCKET_NAME -e PYTORCH_TEST_CUDA_MEM_LEAK_CHECK -e PYTORCH_TEST_RERUN_DISABLED_TESTS -e SKIP_SCCACHE_INITIALIZATION=1 -e HUGGING_FACE_HUB_TOKEN -e DASHBOARD_TAG --env-file=/tmp/github_env_10288745067 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --ipc=host --shm-size=2g --tty --detach --name= --user jenkins -v /home/ec2-user/actions-runner/_work/pytorch/pytorch:/var/lib/jenkins/workspace -w /var/lib/jenkins/workspace 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9 2024-08-07T17:57:46.8780706Z + container_name=b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T17:57:46.8783926Z + grep download.pytorch.org /etc/hosts 2024-08-07T17:57:46.8785611Z + docker exec -i b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 sudo bash -c '/bin/cat >> /etc/hosts' 2024-08-07T17:57:47.0354943Z + echo DOCKER_CONTAINER_ID=b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T17:57:47.0359702Z ++ echo dist/torch-2.5.0a0+git016588f-cp310-cp310-linux_x86_64.whl 2024-08-07T17:57:47.0363589Z + docker exec -t b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 sh -c 'pip install dist/torch-2.5.0a0+git016588f-cp310-cp310-linux_x86_64.whl[opt-einsum] && .ci/pytorch/test.sh' 2024-08-07T17:57:47.6032805Z Processing ./dist/torch-2.5.0a0+git016588f-cp310-cp310-linux_x86_64.whl (from torch==2.5.0a0+git016588f) 2024-08-07T17:57:48.7176971Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (3.13.1) 2024-08-07T17:57:48.7184563Z Requirement already satisfied: typing-extensions>=4.8.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (4.12.2) 2024-08-07T17:57:48.7189450Z Requirement already satisfied: networkx in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (2.8.8) 2024-08-07T17:57:48.7194813Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (3.1.4) 2024-08-07T17:57:48.7200721Z Requirement already satisfied: fsspec in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (2024.6.1) 2024-08-07T17:57:48.7214794Z Requirement already satisfied: sympy>=1.13.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (1.13.1) 2024-08-07T17:57:48.7259041Z Requirement already satisfied: opt-einsum>=3.3 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from 
torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (3.3.0) 2024-08-07T17:57:48.7344148Z Requirement already satisfied: numpy>=1.7 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from opt-einsum>=3.3->torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (1.21.2) 2024-08-07T17:57:48.7392693Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from sympy>=1.13.0->torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (1.3.0) 2024-08-07T17:57:48.8718321Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from jinja2->torch==2.5.0a0+git016588f->torch==2.5.0a0+git016588f) (2.1.5) 2024-08-07T17:57:50.0575855Z Installing collected packages: torch 2024-08-07T17:58:03.6629944Z Successfully installed torch-2.5.0a0+git016588f 2024-08-07T17:58:03.7416135Z ++ dirname .ci/pytorch/test.sh 2024-08-07T17:58:03.7423403Z + source .ci/pytorch/common.sh 2024-08-07T17:58:03.7427596Z +++ dirname .ci/pytorch/common.sh 2024-08-07T17:58:03.7436256Z ++ source .ci/pytorch/common_utils.sh 2024-08-07T17:58:03.7438939Z +++ declare -f -t trap_add 2024-08-07T17:58:03.7446674Z ++ set -ex 2024-08-07T17:58:03.7447555Z ++ [[ linux-focal-cuda12.1-py3.10-gcc9 == *rocm* ]] 2024-08-07T17:58:03.7448258Z ++ BUILD_TEST_LIBTORCH=0 2024-08-07T17:58:03.7449326Z + [[ linux-focal-cuda12.1-py3.10-gcc9 != *rocm* ]] 2024-08-07T17:58:03.7452685Z ++ stat -c %u /var/lib/jenkins/workspace 2024-08-07T17:58:03.7469550Z + WORKSPACE_ORIGINAL_OWNER_ID=1000 2024-08-07T17:58:03.7469987Z + trap_add cleanup_workspace EXIT 2024-08-07T17:58:03.7470498Z + trap_add_cmd=cleanup_workspace 2024-08-07T17:58:03.7471037Z + shift 2024-08-07T17:58:03.7471320Z + for trap_add_name in "$@" 2024-08-07T17:58:03.7479357Z +++ trap -p EXIT 2024-08-07T17:58:03.7482148Z ++ eval 'extract_trap_cmd ' 2024-08-07T17:58:03.7482543Z +++ extract_trap_cmd 2024-08-07T17:58:03.7482922Z +++ printf '%s\n' '' 2024-08-07T17:58:03.7483645Z ++ printf '%s\n' cleanup_workspace 2024-08-07T17:58:03.7486021Z + trap -- ' 2024-08-07T17:58:03.7486344Z cleanup_workspace' EXIT 2024-08-07T17:58:03.7486748Z + sudo chown -R jenkins /var/lib/jenkins/workspace 2024-08-07T17:58:04.3673114Z + git config --global --add safe.directory /var/lib/jenkins/workspace 2024-08-07T17:58:04.3695505Z + echo 'Environment variables:' 2024-08-07T17:58:04.3696147Z Environment variables: 2024-08-07T17:58:04.3696489Z + env 2024-08-07T17:58:04.3706923Z INSTALLED_DB=yes 2024-08-07T17:58:04.3707640Z NV_LIBCUBLAS_VERSION=12.1.3.1-1 2024-08-07T17:58:04.3708294Z NVIDIA_VISIBLE_DEVICES=all 2024-08-07T17:58:04.3708993Z NV_NVML_DEV_VERSION=12.1.105-1 2024-08-07T17:58:04.3709514Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2024-08-07T17:58:04.3710127Z CONTINUE_THROUGH_ERROR=False 2024-08-07T17:58:04.3710549Z NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.17.1-1+cuda12.1 2024-08-07T17:58:04.3711001Z NV_LIBNCCL_DEV_PACKAGE_VERSION=2.17.1-1 2024-08-07T17:58:04.3711477Z BUILD_ENVIRONMENT=linux-focal-cuda12.1-py3.10-gcc9 2024-08-07T17:58:04.3711933Z HOSTNAME=b555cd11eec4 2024-08-07T17:58:04.3712631Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.3713408Z GITHUB_ACTION=__self 2024-08-07T17:58:04.3713760Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2024-08-07T17:58:04.3717488Z NVIDIA_REQUIRE_CUDA=cuda>=12.1 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 
brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 2024-08-07T17:58:04.3721064Z NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-1=12.1.3.1-1 2024-08-07T17:58:04.3721529Z NV_NVTX_VERSION=12.1.105-1 2024-08-07T17:58:04.3721883Z GITHUB_RUN_NUMBER=234358 2024-08-07T17:58:04.3722213Z TEST_CONFIG=default 2024-08-07T17:58:04.3722551Z GITHUB_REPOSITORY_OWNER_ID=21003710 2024-08-07T17:58:04.3722974Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2024-08-07T17:58:04.3723377Z NV_CUDA_CUDART_DEV_VERSION=12.1.105-1 2024-08-07T17:58:04.3723786Z NV_LIBCUSPARSE_VERSION=12.1.0.106-1 2024-08-07T17:58:04.3724184Z NV_LIBNPP_VERSION=12.1.0.40-1 2024-08-07T17:58:04.3724865Z GITHUB_TRIGGERING_ACTOR=zdevito 2024-08-07T17:58:04.3725298Z CMAKE_CUDA_COMPILER_LAUNCHER=/opt/cache/bin/sccache 2024-08-07T17:58:04.3725745Z GITHUB_REF_TYPE=branch 2024-08-07T17:58:04.3726067Z TORCH_CUDA_ARCH_LIST=Maxwell 2024-08-07T17:58:04.3726423Z NCCL_VERSION=2.17.1-1 2024-08-07T17:58:04.3726792Z BASE_SHA=6ce09a9bb33e4011761558032e2165ad7b49fb68 2024-08-07T17:58:04.3727196Z XLA_CUDA= 2024-08-07T17:58:04.3727486Z HUGGING_FACE_HUB_TOKEN= 2024-08-07T17:58:04.3728031Z *** 2024-08-07T17:58:04.3728313Z CARGO_NET_GIT_FETCH_WITH_CLI=true 2024-08-07T17:58:04.3728703Z GITHUB_REPOSITORY_ID=65600975 2024-08-07T17:58:04.3729066Z GITHUB_ACTIONS=true 2024-08-07T17:58:04.3729550Z NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:58:04.3730006Z NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-1=12.1.105-1 2024-08-07T17:58:04.3730474Z NV_LIBNPP_PACKAGE=libnpp-12-1=12.1.0.40-1 2024-08-07T17:58:04.3730901Z SHA1=016588f53c6904b840aa56aa86f95460b4d9c996 2024-08-07T17:58:04.3731344Z NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev 2024-08-07T17:58:04.3731809Z GITHUB_SHA=f779f6b7738020e244184bded4026b37de3f9f24 2024-08-07T17:58:04.3732457Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/pull.yml@refs/pull/131248/merge 2024-08-07T17:58:04.3733080Z UCC_HOME=/usr 2024-08-07T17:58:04.3733403Z NV_LIBCUBLAS_DEV_VERSION=12.1.3.1-1 2024-08-07T17:58:04.3733775Z VERBOSE_TEST_LOGS=False 2024-08-07T17:58:04.3734115Z NVIDIA_PRODUCT_NAME=CUDA 2024-08-07T17:58:04.3734518Z NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-1 2024-08-07T17:58:04.3734954Z GITHUB_REF=refs/pull/131248/merge 2024-08-07T17:58:04.3735340Z NV_CUDA_CUDART_VERSION=12.1.105-1 2024-08-07T17:58:04.3735707Z SHARD_NUMBER=3 2024-08-07T17:58:04.3736012Z GITHUB_REF_PROTECTED=false 2024-08-07T17:58:04.3736366Z HOME=/var/lib/jenkins 2024-08-07T17:58:04.3736741Z GITHUB_API_URL=https://api.github.com 2024-08-07T17:58:04.3737144Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2024-08-07T17:58:04.3737591Z UCX_COMMIT=7bb2722ff2187a0cad557ae4a6afa090569f83fb 2024-08-07T17:58:04.3738026Z SCCACHE_S3_KEY_PREFIX=pull 2024-08-07T17:58:04.3738383Z CUDA_VERSION=12.1.1 2024-08-07T17:58:04.3738737Z NV_LIBCUBLAS_PACKAGE=libcublas-12-1=12.1.3.1-1 2024-08-07T17:58:04.3739140Z NUM_TEST_SHARDS=5 2024-08-07T17:58:04.3739444Z UCX_HOME=/usr 2024-08-07T17:58:04.3739884Z 
NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-1=12.1.1-1 2024-08-07T17:58:04.3740794Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.3741858Z JOB_NAME=linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, amz2023.linux.4xlarge.nvidia.gpu) 2024-08-07T17:58:04.3742972Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.3744038Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2024-08-07T17:58:04.3744703Z GITHUB_EVENT_NAME=pull_request 2024-08-07T17:58:04.3745090Z DASHBOARD_TAG= 2024-08-07T17:58:04.3745413Z GITHUB_RUN_ID=10288745067 2024-08-07T17:58:04.3745829Z NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-1=12.1.0.40-1 2024-08-07T17:58:04.3746332Z NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-1 2024-08-07T17:58:04.3747240Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.3748102Z GITHUB_ACTOR=zdevito 2024-08-07T17:58:04.3748463Z NV_LIBNPP_DEV_VERSION=12.1.0.40-1 2024-08-07T17:58:04.3748860Z PR_NUMBER=131248 2024-08-07T17:58:04.3749173Z GITHUB_RUN_ATTEMPT=1 2024-08-07T17:58:04.3749533Z ANACONDA_PYTHON_VERSION=3.10 2024-08-07T17:58:04.3749999Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2024-08-07T17:58:04.3750469Z TERM=xterm 2024-08-07T17:58:04.3750814Z NV_LIBCUSPARSE_DEV_VERSION=12.1.0.106-1 2024-08-07T17:58:04.3751247Z INSTALLED_VISION=yes 2024-08-07T17:58:04.3751590Z BRANCH=pull/131248 2024-08-07T17:58:04.3751920Z OPENSSL_ROOT_DIR=/opt/openssl 2024-08-07T17:58:04.3752342Z LIBRARY_PATH=/usr/local/cuda/lib64/stubs 2024-08-07T17:58:04.3752783Z CUDA_PATH=/usr/local/cuda 2024-08-07T17:58:04.3753677Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2024-08-07T17:58:04.3754420Z GITHUB_SERVER_URL=https://github.com 2024-08-07T17:58:04.3754897Z UCC_COMMIT=20eae37090a4ce1b32bcce6144ccad0b49943e0b 2024-08-07T17:58:04.3755333Z REENABLED_ISSUES= 2024-08-07T17:58:04.3755640Z SHLVL=1 2024-08-07T17:58:04.3755915Z MAX_JOBS=14 2024-08-07T17:58:04.3756199Z NV_CUDA_LIB_VERSION=12.1.1-1 2024-08-07T17:58:04.3756559Z NVARCH=x86_64 2024-08-07T17:58:04.3756865Z GITHUB_ACTOR_ID=370202 2024-08-07T17:58:04.3757377Z GITHUB_WORKFLOW_SHA=f779f6b7738020e244184bded4026b37de3f9f24 2024-08-07T17:58:04.3757891Z GITHUB_REF_NAME=131248/merge 2024-08-07T17:58:04.3758262Z NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-1 2024-08-07T17:58:04.3758866Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2024-08-07T17:58:04.3759418Z GITHUB_JOB=test 2024-08-07T17:58:04.3759768Z NV_LIBNCCL_PACKAGE=libnccl2=2.17.1-1+cuda12.1 2024-08-07T17:58:04.3760280Z LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2024-08-07T17:58:04.3760816Z NO_TEST_TIMEOUT=False 2024-08-07T17:58:04.3761152Z TD_DISTRIBUTED=False 2024-08-07T17:58:04.3761485Z NV_CUDA_NSIGHT_COMPUTE_VERSION=12.1.1-1 2024-08-07T17:58:04.3761906Z GITHUB_REPOSITORY=pytorch/pytorch 2024-08-07T17:58:04.3762298Z NV_NVPROF_VERSION=12.1.105-1 2024-08-07T17:58:04.3762639Z GITHUB_RETENTION_DAYS=90 2024-08-07T17:58:04.3762992Z OPENSSL_DIR=/opt/openssl 2024-08-07T17:58:04.3763320Z GITHUB_ACTION_REPOSITORY= 2024-08-07T17:58:04.3764259Z 
PATH=/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2024-08-07T17:58:04.3765234Z GITHUB_BASE_REF=gh/zdevito/267/base 2024-08-07T17:58:04.3765642Z NV_LIBNCCL_PACKAGE_NAME=libnccl2 2024-08-07T17:58:04.3765985Z CI=true 2024-08-07T17:58:04.3766281Z NV_LIBNCCL_PACKAGE_VERSION=2.17.1-1 2024-08-07T17:58:04.3766670Z GITHUB_REPOSITORY_OWNER=pytorch 2024-08-07T17:58:04.3767044Z JOB_ID=28476182521 2024-08-07T17:58:04.3767360Z INSTALLED_PROTOBUF=yes 2024-08-07T17:58:04.3767690Z GITHUB_HEAD_REF=gh/zdevito/267/head 2024-08-07T17:58:04.3768069Z GITHUB_ACTION_REF= 2024-08-07T17:58:04.3768476Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2024-08-07T17:58:04.3768913Z TEST_SHOWLOCALS=False 2024-08-07T17:58:04.3769243Z GITHUB_WORKFLOW=pull 2024-08-07T17:58:04.3769583Z DEBIAN_FRONTEND=noninteractive 2024-08-07T17:58:04.3770342Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.3771126Z NO_TD=False 2024-08-07T17:58:04.3771447Z SKIP_SCCACHE_INITIALIZATION=1 2024-08-07T17:58:04.3771799Z _=/usr/bin/env 2024-08-07T17:58:04.3772213Z ++ python -c 'import site; print(site.getsitepackages()[0])' 2024-08-07T17:58:04.3937267Z + TORCH_INSTALL_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch 2024-08-07T17:58:04.3938031Z + TORCH_BIN_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/bin 2024-08-07T17:58:04.3939137Z + TORCH_LIB_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib 2024-08-07T17:58:04.3940021Z + TORCH_TEST_DIR=/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/test 2024-08-07T17:58:04.3940582Z + BUILD_DIR=build 2024-08-07T17:58:04.3940899Z + BUILD_RENAMED_DIR=build_renamed 2024-08-07T17:58:04.3941284Z + BUILD_BIN_DIR=build/bin 2024-08-07T17:58:04.3941625Z + SHARD_NUMBER=3 2024-08-07T17:58:04.3941916Z + NUM_TEST_SHARDS=5 2024-08-07T17:58:04.3942233Z + export VALGRIND=ON 2024-08-07T17:58:04.3942533Z + VALGRIND=ON 2024-08-07T17:58:04.3942917Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *clang9* ]] 2024-08-07T17:58:04.3943362Z + [[ 0 == \1 ]] 2024-08-07T17:58:04.3943643Z + [[ False == \1 ]] 2024-08-07T17:58:04.3944012Z + [[ linux-focal-cuda12.1-py3.10-gcc9 != *bazel* ]] 2024-08-07T17:58:04.3945351Z ++ realpath build/custom_test_artifacts 2024-08-07T17:58:04.3956830Z + CUSTOM_TEST_ARTIFACT_BUILD_DIR=/var/lib/jenkins/workspace/build/custom_test_artifacts 2024-08-07T17:58:04.3957725Z + [[ -n '' ]] 2024-08-07T17:58:04.3958045Z + echo 'Environment variables' 2024-08-07T17:58:04.3958420Z Environment variables 2024-08-07T17:58:04.3958720Z + env 2024-08-07T17:58:04.3966669Z INSTALLED_DB=yes 2024-08-07T17:58:04.3967306Z NV_LIBCUBLAS_VERSION=12.1.3.1-1 2024-08-07T17:58:04.3967972Z NVIDIA_VISIBLE_DEVICES=all 2024-08-07T17:58:04.3968406Z NV_NVML_DEV_VERSION=12.1.105-1 2024-08-07T17:58:04.3968920Z GITHUB_WORKSPACE=/home/ec2-user/actions-runner/_work/pytorch/pytorch 2024-08-07T17:58:04.3969454Z CONTINUE_THROUGH_ERROR=False 2024-08-07T17:58:04.3970186Z NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.17.1-1+cuda12.1 2024-08-07T17:58:04.3970937Z NV_LIBNCCL_DEV_PACKAGE_VERSION=2.17.1-1 2024-08-07T17:58:04.3971395Z BUILD_ENVIRONMENT=linux-focal-cuda12.1-py3.10-gcc9 2024-08-07T17:58:04.3971851Z HOSTNAME=b555cd11eec4 2024-08-07T17:58:04.3972572Z GITHUB_PATH=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/add_path_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 
2024-08-07T17:58:04.3973367Z GITHUB_ACTION=__self 2024-08-07T17:58:04.3973766Z PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=0 2024-08-07T17:58:04.3978454Z NVIDIA_REQUIRE_CUDA=cuda>=12.1 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526 2024-08-07T17:58:04.3982611Z NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-1=12.1.3.1-1 2024-08-07T17:58:04.3983096Z NV_NVTX_VERSION=12.1.105-1 2024-08-07T17:58:04.3983430Z GITHUB_RUN_NUMBER=234358 2024-08-07T17:58:04.3983771Z TEST_CONFIG=default 2024-08-07T17:58:04.3984160Z GITHUB_REPOSITORY_OWNER_ID=21003710 2024-08-07T17:58:04.3984560Z TORCH_NVCC_FLAGS=-Xfatbin -compress-all 2024-08-07T17:58:04.3984981Z NV_CUDA_CUDART_DEV_VERSION=12.1.105-1 2024-08-07T17:58:04.3985388Z NV_LIBCUSPARSE_VERSION=12.1.0.106-1 2024-08-07T17:58:04.3985765Z NV_LIBNPP_VERSION=12.1.0.40-1 2024-08-07T17:58:04.3986141Z GITHUB_TRIGGERING_ACTOR=zdevito 2024-08-07T17:58:04.3986572Z CMAKE_CUDA_COMPILER_LAUNCHER=/opt/cache/bin/sccache 2024-08-07T17:58:04.3987004Z GITHUB_REF_TYPE=branch 2024-08-07T17:58:04.3987353Z TORCH_CUDA_ARCH_LIST=Maxwell 2024-08-07T17:58:04.3987711Z NCCL_VERSION=2.17.1-1 2024-08-07T17:58:04.3988068Z BASE_SHA=6ce09a9bb33e4011761558032e2165ad7b49fb68 2024-08-07T17:58:04.3988490Z XLA_CUDA= 2024-08-07T17:58:04.3988765Z HUGGING_FACE_HUB_TOKEN= 2024-08-07T17:58:04.3989226Z *** 2024-08-07T17:58:04.3989525Z CARGO_NET_GIT_FETCH_WITH_CLI=true 2024-08-07T17:58:04.3989927Z GITHUB_REPOSITORY_ID=65600975 2024-08-07T17:58:04.3990274Z GITHUB_ACTIONS=true 2024-08-07T17:58:04.3990611Z NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T17:58:04.3991024Z NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-1=12.1.105-1 2024-08-07T17:58:04.3991493Z NV_LIBNPP_PACKAGE=libnpp-12-1=12.1.0.40-1 2024-08-07T17:58:04.3991942Z SHA1=016588f53c6904b840aa56aa86f95460b4d9c996 2024-08-07T17:58:04.3992376Z NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev 2024-08-07T17:58:04.3992834Z GITHUB_SHA=f779f6b7738020e244184bded4026b37de3f9f24 2024-08-07T17:58:04.3993505Z GITHUB_WORKFLOW_REF=pytorch/pytorch/.github/workflows/pull.yml@refs/pull/131248/merge 2024-08-07T17:58:04.3994124Z UCC_HOME=/usr 2024-08-07T17:58:04.3994426Z NV_LIBCUBLAS_DEV_VERSION=12.1.3.1-1 2024-08-07T17:58:04.3994818Z VERBOSE_TEST_LOGS=False 2024-08-07T17:58:04.3995644Z NVIDIA_PRODUCT_NAME=CUDA 2024-08-07T17:58:04.3996059Z NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-1 2024-08-07T17:58:04.3996515Z GITHUB_REF=refs/pull/131248/merge 2024-08-07T17:58:04.3997095Z NV_CUDA_CUDART_VERSION=12.1.105-1 2024-08-07T17:58:04.3997458Z SHARD_NUMBER=3 2024-08-07T17:58:04.3997769Z GITHUB_REF_PROTECTED=false 2024-08-07T17:58:04.3998098Z HOME=/var/lib/jenkins 2024-08-07T17:58:04.3998461Z GITHUB_API_URL=https://api.github.com 2024-08-07T17:58:04.3998884Z PYTORCH_TEST_RERUN_DISABLED_TESTS=0 2024-08-07T17:58:04.3999313Z UCX_COMMIT=7bb2722ff2187a0cad557ae4a6afa090569f83fb 2024-08-07T17:58:04.3999763Z SCCACHE_S3_KEY_PREFIX=pull 
2024-08-07T17:58:04.4000109Z CUDA_VERSION=12.1.1 2024-08-07T17:58:04.4000449Z NV_LIBCUBLAS_PACKAGE=libcublas-12-1=12.1.3.1-1 2024-08-07T17:58:04.4000864Z NUM_TEST_SHARDS=5 2024-08-07T17:58:04.4001291Z UCX_HOME=/usr 2024-08-07T17:58:04.4001739Z NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-1=12.1.1-1 2024-08-07T17:58:04.4002663Z GITHUB_STATE=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/save_state_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.4003723Z JOB_NAME=linux-focal-cuda12.1-py3.10-gcc9 / test (default, 3, 5, amz2023.linux.4xlarge.nvidia.gpu) 2024-08-07T17:58:04.4004838Z GITHUB_ENV=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_env_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.4005893Z GITHUB_EVENT_PATH=/home/ec2-user/actions-runner/_work/_temp/_github_workflow/event.json 2024-08-07T17:58:04.4006572Z GITHUB_EVENT_NAME=pull_request 2024-08-07T17:58:04.4006951Z DASHBOARD_TAG= 2024-08-07T17:58:04.4007274Z GITHUB_RUN_ID=10288745067 2024-08-07T17:58:04.4007696Z NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-1=12.1.0.40-1 2024-08-07T17:58:04.4008182Z NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-1 2024-08-07T17:58:04.4009096Z GITHUB_STEP_SUMMARY=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/step_summary_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.4009989Z GITHUB_ACTOR=zdevito 2024-08-07T17:58:04.4010332Z NV_LIBNPP_DEV_VERSION=12.1.0.40-1 2024-08-07T17:58:04.4010733Z PR_NUMBER=131248 2024-08-07T17:58:04.4011063Z GITHUB_RUN_ATTEMPT=1 2024-08-07T17:58:04.4011387Z VALGRIND=ON 2024-08-07T17:58:04.4011722Z ANACONDA_PYTHON_VERSION=3.10 2024-08-07T17:58:04.4012191Z GITHUB_GRAPHQL_URL=https://api.github.com/graphql 2024-08-07T17:58:04.4012656Z TERM=xterm 2024-08-07T17:58:04.4012990Z NV_LIBCUSPARSE_DEV_VERSION=12.1.0.106-1 2024-08-07T17:58:04.4013424Z INSTALLED_VISION=yes 2024-08-07T17:58:04.4013752Z BRANCH=pull/131248 2024-08-07T17:58:04.4014102Z OPENSSL_ROOT_DIR=/opt/openssl 2024-08-07T17:58:04.4014519Z LIBRARY_PATH=/usr/local/cuda/lib64/stubs 2024-08-07T17:58:04.4014944Z CUDA_PATH=/usr/local/cuda 2024-08-07T17:58:04.4015641Z GITHUB_ACTION_PATH=/home/ec2-user/actions-runner/_work/pytorch/pytorch/./.github/actions/setup-linux 2024-08-07T17:58:04.4016417Z GITHUB_SERVER_URL=https://github.com 2024-08-07T17:58:04.4016897Z UCC_COMMIT=20eae37090a4ce1b32bcce6144ccad0b49943e0b 2024-08-07T17:58:04.4017380Z REENABLED_ISSUES= 2024-08-07T17:58:04.4017704Z SHLVL=1 2024-08-07T17:58:04.4017974Z MAX_JOBS=14 2024-08-07T17:58:04.4018293Z NV_CUDA_LIB_VERSION=12.1.1-1 2024-08-07T17:58:04.4018683Z NVARCH=x86_64 2024-08-07T17:58:04.4018991Z GITHUB_ACTOR_ID=370202 2024-08-07T17:58:04.4019451Z GITHUB_WORKFLOW_SHA=f779f6b7738020e244184bded4026b37de3f9f24 2024-08-07T17:58:04.4019959Z GITHUB_REF_NAME=131248/merge 2024-08-07T17:58:04.4020376Z NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-1 2024-08-07T17:58:04.4020980Z XLA_CLANG_CACHE_S3_BUCKET_NAME=ossci-compiler-clang-cache-circleci-xla 2024-08-07T17:58:04.4021539Z GITHUB_JOB=test 2024-08-07T17:58:04.4021904Z NV_LIBNCCL_PACKAGE=libnccl2=2.17.1-1+cuda12.1 2024-08-07T17:58:04.4022461Z LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 2024-08-07T17:58:04.4022961Z NO_TEST_TIMEOUT=False 2024-08-07T17:58:04.4023318Z TD_DISTRIBUTED=False 2024-08-07T17:58:04.4023693Z NV_CUDA_NSIGHT_COMPUTE_VERSION=12.1.1-1 2024-08-07T17:58:04.4024128Z GITHUB_REPOSITORY=pytorch/pytorch 2024-08-07T17:58:04.4024551Z NV_NVPROF_VERSION=12.1.105-1 2024-08-07T17:58:04.4024940Z GITHUB_RETENTION_DAYS=90 
2024-08-07T17:58:04.4025291Z OPENSSL_DIR=/opt/openssl 2024-08-07T17:58:04.4025788Z GITHUB_ACTION_REPOSITORY= 2024-08-07T17:58:04.4026821Z PATH=/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2024-08-07T17:58:04.4027883Z GITHUB_BASE_REF=gh/zdevito/267/base 2024-08-07T17:58:04.4028321Z NV_LIBNCCL_PACKAGE_NAME=libnccl2 2024-08-07T17:58:04.4028713Z CI=true 2024-08-07T17:58:04.4029005Z NV_LIBNCCL_PACKAGE_VERSION=2.17.1-1 2024-08-07T17:58:04.4029438Z GITHUB_REPOSITORY_OWNER=pytorch 2024-08-07T17:58:04.4029827Z JOB_ID=28476182521 2024-08-07T17:58:04.4030139Z INSTALLED_PROTOBUF=yes 2024-08-07T17:58:04.4030605Z GITHUB_HEAD_REF=gh/zdevito/267/head 2024-08-07T17:58:04.4031048Z GITHUB_ACTION_REF= 2024-08-07T17:58:04.4031435Z SCCACHE_BUCKET=ossci-compiler-cache-circleci-v2 2024-08-07T17:58:04.4031933Z TEST_SHOWLOCALS=False 2024-08-07T17:58:04.4032286Z GITHUB_WORKFLOW=pull 2024-08-07T17:58:04.4032633Z DEBIAN_FRONTEND=noninteractive 2024-08-07T17:58:04.4033460Z GITHUB_OUTPUT=/home/ec2-user/actions-runner/_work/_temp/_runner_file_commands/set_output_187cf2bc-1f5f-47ea-9fcb-6e35aa640090 2024-08-07T17:58:04.4034310Z NO_TD=False 2024-08-07T17:58:04.4034619Z SKIP_SCCACHE_INITIALIZATION=1 2024-08-07T17:58:04.4035008Z _=/usr/bin/env 2024-08-07T17:58:04.4035333Z + echo 'Testing pytorch' 2024-08-07T17:58:04.4035678Z Testing pytorch 2024-08-07T17:58:04.4036033Z + export LANG=C.UTF-8 2024-08-07T17:58:04.4036358Z + LANG=C.UTF-8 2024-08-07T17:58:04.4036697Z + PR_NUMBER=131248 2024-08-07T17:58:04.4037045Z + [[ default == \d\e\f\a\u\l\t ]] 2024-08-07T17:58:04.4037460Z + export CUDA_VISIBLE_DEVICES=0 2024-08-07T17:58:04.4037846Z + CUDA_VISIBLE_DEVICES=0 2024-08-07T17:58:04.4038235Z + export HIP_VISIBLE_DEVICES=0 2024-08-07T17:58:04.4038637Z + HIP_VISIBLE_DEVICES=0 2024-08-07T17:58:04.4039001Z + [[ default == \d\i\s\t\r\i\b\u\t\e\d ]] 2024-08-07T17:58:04.4039434Z + [[ default == \s\l\o\w ]] 2024-08-07T17:58:04.4039902Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *slow-gradcheck* ]] 2024-08-07T17:58:04.4040483Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *cuda* ]] 2024-08-07T17:58:04.4041016Z + export PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda 2024-08-07T17:58:04.4041481Z + PYTORCH_TESTING_DEVICE_ONLY_FOR=cuda 2024-08-07T17:58:04.4041915Z + [[ default == *crossref* ]] 2024-08-07T17:58:04.4042359Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *rocm* ]] 2024-08-07T17:58:04.4042867Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *xpu* ]] 2024-08-07T17:58:04.4043406Z + [[ linux-focal-cuda12.1-py3.10-gcc9 != *-bazel-* ]] 2024-08-07T17:58:04.4043903Z + pip_install --user ninja==1.10.2 2024-08-07T17:58:04.4044376Z + pip install --progress-bar off --user ninja==1.10.2 2024-08-07T17:58:05.0080318Z Collecting ninja==1.10.2 2024-08-07T17:58:05.0320506Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl.metadata (5.0 kB) 2024-08-07T17:58:05.0510050Z Downloading ninja-1.10.2-py2.py3-none-manylinux_2_5_x86_64.manylinux1_x86_64.whl (108 kB) 2024-08-07T17:58:06.2789657Z Installing collected packages: ninja 2024-08-07T17:58:06.2900786Z  WARNING: The script ninja is installed in '/var/lib/jenkins/.local/bin' which is not on PATH. 2024-08-07T17:58:06.2902014Z Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location. 
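The `+ pip_install --user ninja==1.10.2` line expanding into `+ pip install --progress-bar off --user ninja==1.10.2` shows that `pip_install` is a thin shell wrapper around pip. A minimal sketch of that pattern, assuming the wrapper only injects the quiet-progress flag (the real helper in the CI scripts may do more):

    # pip_install: run pip with progress bars disabled, forwarding all
    # arguments unchanged. This reproduces the expansion seen in the trace.
    pip_install() {
      pip install --progress-bar off "$@"
    }

    # Usage, as in the log:
    pip_install --user ninja==1.10.2

Because `--user` installs place scripts in `$(python -m site --user-base)/bin` (the source of pip's PATH warning above), the script prepends that directory to PATH right after installing, as the following lines show.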
2024-08-07T17:58:06.2957922Z Successfully installed ninja-1.10.2 2024-08-07T17:58:06.3735179Z + export PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2024-08-07T17:58:06.3737118Z + PATH=/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2024-08-07T17:58:06.3738288Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *aarch64* ]] 2024-08-07T17:58:06.3738755Z + install_tlparse 2024-08-07T17:58:06.3739093Z + pip_install --user tlparse==0.3.7 2024-08-07T17:58:06.3739564Z + pip install --progress-bar off --user tlparse==0.3.7 2024-08-07T17:58:06.9340839Z Collecting tlparse==0.3.7 2024-08-07T17:58:06.9535799Z Downloading tlparse-0.3.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (346 bytes) 2024-08-07T17:58:06.9633822Z Downloading tlparse-0.3.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.2 MB) 2024-08-07T17:58:08.2467510Z Installing collected packages: tlparse 2024-08-07T17:58:08.2960124Z Successfully installed tlparse-0.3.7 2024-08-07T17:58:08.3734252Z ++ python -m site --user-base 2024-08-07T17:58:08.3982366Z + PATH=/var/lib/jenkins/.local/bin:/var/lib/jenkins/.local/bin:/opt/cache/bin:/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin 2024-08-07T17:58:08.3983843Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *asan* ]] 2024-08-07T17:58:08.3984367Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *-debug* ]] 2024-08-07T17:58:08.3984862Z + [[ linux-focal-cuda12.1-py3.10-gcc9 != *-bazel-* ]] 2024-08-07T17:58:08.3985589Z + echo 'We are not in debug mode: linux-focal-cuda12.1-py3.10-gcc9. Expect the assertion to pass' 2024-08-07T17:58:08.3986430Z We are not in debug mode: linux-focal-cuda12.1-py3.10-gcc9. 
Expect the assertion to pass 2024-08-07T17:58:08.3988235Z + cd test 2024-08-07T17:58:08.3988669Z + python -c 'import torch; torch._C._crash_if_debug_asserts_fail(424242)' 2024-08-07T17:58:10.6133029Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]] 2024-08-07T17:58:10.6133527Z + [[ default == \n\o\g\p\u\_\A\V\X\5\1\2 ]] 2024-08-07T17:58:10.6138273Z + DYNAMO_BENCHMARK_FLAGS=() 2024-08-07T17:58:10.6138715Z + [[ default == *dynamo_eager* ]] 2024-08-07T17:58:10.6139108Z + [[ default == *aot_eager* ]] 2024-08-07T17:58:10.6139479Z + [[ default == *aot_inductor* ]] 2024-08-07T17:58:10.6139858Z + [[ default == *inductor* ]] 2024-08-07T17:58:10.6140206Z + [[ default == *dynamic* ]] 2024-08-07T17:58:10.6140558Z + [[ default == *cpu* ]] 2024-08-07T17:58:10.6143322Z + DYNAMO_BENCHMARK_FLAGS+=(--device cuda) 2024-08-07T17:58:10.6179706Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *libtorch* ]] 2024-08-07T17:58:10.6180251Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *-bazel-* ]] 2024-08-07T17:58:10.6183689Z + cd test 2024-08-07T17:58:10.6184074Z + python -c 'import torch; print(torch.__config__.show())' 2024-08-07T17:58:12.5393106Z PyTorch built with: 2024-08-07T17:58:12.5393485Z - GCC 9.4 2024-08-07T17:58:12.5393792Z - C++ Version: 201703 2024-08-07T17:58:12.5394515Z - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications 2024-08-07T17:58:12.5395932Z - Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67) 2024-08-07T17:58:12.5396565Z - OpenMP 201511 (a.k.a. OpenMP 4.5) 2024-08-07T17:58:12.5397041Z - LAPACK is enabled (usually provided by MKL) 2024-08-07T17:58:12.5397494Z - NNPACK is enabled 2024-08-07T17:58:12.5397871Z - CPU capability usage: AVX2 2024-08-07T17:58:12.5398291Z - CUDA Runtime 12.1 2024-08-07T17:58:12.5398775Z - NVCC architecture flags: -gencode;arch=compute_52,code=sm_52 2024-08-07T17:58:12.5399345Z - CuDNN 90.1 (built against CUDA 12.4) 2024-08-07T17:58:12.5399772Z - Magma 2.6.1 2024-08-07T17:58:12.5405894Z - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/cache/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Werror -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, FORCE_FALLBACK_CUDA_MPI=1, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.5.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=ON, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 2024-08-07T17:58:12.5412639Z 2024-08-07T17:58:12.8298282Z + cd test 2024-08-07T17:58:12.8298812Z + python -c 'import torch; print(torch.__config__.parallel_info())' 2024-08-07T17:58:14.7458206Z ATen/Parallel: 2024-08-07T17:58:14.7458685Z at::get_num_threads() : 8 
2024-08-07T17:58:14.7459079Z at::get_num_interop_threads() : 8 2024-08-07T17:58:14.7459463Z OpenMP 201511 (a.k.a. OpenMP 4.5) 2024-08-07T17:58:14.7459852Z omp_get_max_threads() : 8 2024-08-07T17:58:14.7460542Z Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications 2024-08-07T17:58:14.7461277Z mkl_get_max_threads() : 8 2024-08-07T17:58:14.7461765Z Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67) 2024-08-07T17:58:14.7462385Z std::thread::hardware_concurrency() : 16 2024-08-07T17:58:14.7462818Z Environment variables: 2024-08-07T17:58:14.7463183Z OMP_NUM_THREADS : [not set] 2024-08-07T17:58:14.7463576Z MKL_NUM_THREADS : [not set] 2024-08-07T17:58:14.7463957Z ATen parallel backend: OpenMP 2024-08-07T17:58:14.7464235Z 2024-08-07T17:58:15.0286892Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *aarch64* ]] 2024-08-07T17:58:15.0287799Z + [[ default == *backward* ]] 2024-08-07T17:58:15.0288190Z + [[ default == *xla* ]] 2024-08-07T17:58:15.0288561Z + [[ default == *executorch* ]] 2024-08-07T17:58:15.0288932Z + [[ default == \j\i\t\_\l\e\g\a\c\y ]] 2024-08-07T17:58:15.0289380Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *libtorch* ]] 2024-08-07T17:58:15.0289847Z + [[ default == distributed ]] 2024-08-07T17:58:15.0290234Z + [[ default == *inductor_distributed* ]] 2024-08-07T17:58:15.0290665Z + [[ default == *inductor-halide* ]] 2024-08-07T17:58:15.0291102Z + [[ default == *inductor-micro-benchmark* ]] 2024-08-07T17:58:15.0291775Z + [[ default == *huggingface* ]] 2024-08-07T17:58:15.0292195Z + [[ default == *timm* ]] 2024-08-07T17:58:15.0292836Z + [[ default == *torchbench* ]] 2024-08-07T17:58:15.0293798Z + [[ default == *inductor_cpp_wrapper_abi_compatible* ]] 2024-08-07T17:58:15.0294408Z + [[ default == *inductor* ]] 2024-08-07T17:58:15.0294774Z + [[ default == *dynamo* ]] 2024-08-07T17:58:15.0295612Z + [[ linux-focal-cuda12.1-py3.10-gcc9 == *rocm* ]] 2024-08-07T17:58:15.0296050Z + [[ 3 == 1 ]] 2024-08-07T17:58:15.0296351Z + [[ 3 == 2 ]] 2024-08-07T17:58:15.0296660Z + [[ 3 -gt 2 ]] 2024-08-07T17:58:15.0296947Z + install_torchvision 2024-08-07T17:58:15.0297277Z + local orig_preload 2024-08-07T17:58:15.0297577Z + local commit 2024-08-07T17:58:15.0297901Z ++ get_pinned_commit vision 2024-08-07T17:58:15.0298287Z ++ cat .github/ci_commit_pins/vision.txt 2024-08-07T17:58:15.0310825Z + commit=d23a6e1664d20707c11781299611436e1f0c104f 2024-08-07T17:58:15.0311749Z + orig_preload= 2024-08-07T17:58:15.0312294Z + '[' -n '' ']' 2024-08-07T17:58:15.0312983Z + pip_install --no-use-pep517 --user git+https://github.com/pytorch/vision.git@d23a6e1664d20707c11781299611436e1f0c104f 2024-08-07T17:58:15.0314187Z + pip install --progress-bar off --no-use-pep517 --user git+https://github.com/pytorch/vision.git@d23a6e1664d20707c11781299611436e1f0c104f 2024-08-07T17:58:15.5259175Z Collecting git+https://github.com/pytorch/vision.git@d23a6e1664d20707c11781299611436e1f0c104f 2024-08-07T17:58:15.5266387Z Cloning https://github.com/pytorch/vision.git (to revision d23a6e1664d20707c11781299611436e1f0c104f) to /tmp/pip-req-build-8faba91z 2024-08-07T17:58:15.5294452Z Running command git clone --filter=blob:none --quiet https://github.com/pytorch/vision.git /tmp/pip-req-build-8faba91z 2024-08-07T17:58:17.2351176Z Running command git rev-parse -q --verify 'sha^d23a6e1664d20707c11781299611436e1f0c104f' 2024-08-07T17:58:17.2378276Z Running command git fetch -q https://github.com/pytorch/vision.git d23a6e1664d20707c11781299611436e1f0c104f 
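The `install_torchvision` step above reads a pinned commit from a file and installs torchvision from GitHub at exactly that revision. A minimal sketch of the pattern, assuming the helper generalizes the `cat .github/ci_commit_pins/vision.txt` seen in the trace to any pin name:

    set -euo pipefail

    # Read the pinned commit for a dependency from its pin file.
    get_pinned_commit() {
      cat ".github/ci_commit_pins/${1}.txt"
    }

    commit=$(get_pinned_commit vision)
    # Build and install at exactly the pinned revision, matching the
    # pip_install flags shown in the trace.
    pip install --progress-bar off --no-use-pep517 --user \
      "git+https://github.com/pytorch/vision.git@${commit}"

Pinning by commit rather than by released version keeps the job reproducible: the clone, fetch, and checkout in this trace always land on d23a6e1664d20707c11781299611436e1f0c104f regardless of where vision's main branch has moved.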
2024-08-07T17:58:18.9710462Z Running command git checkout -q d23a6e1664d20707c11781299611436e1f0c104f 2024-08-07T17:58:19.3162603Z Resolved https://github.com/pytorch/vision.git to commit d23a6e1664d20707c11781299611436e1f0c104f 2024-08-07T17:58:22.3148983Z Preparing metadata (setup.py) ... done 2024-08-07T17:58:22.3214913Z Requirement already satisfied: numpy in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchvision==0.19.0a0+d23a6e1) (1.21.2) 2024-08-07T17:58:22.3220163Z Requirement already satisfied: torch in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchvision==0.19.0a0+d23a6e1) (2.5.0a0+git016588f) 2024-08-07T17:58:22.3228478Z Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torchvision==0.19.0a0+d23a6e1) (10.3.0) 2024-08-07T17:58:22.3563915Z Requirement already satisfied: filelock in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (3.13.1) 2024-08-07T17:58:22.3571558Z Requirement already satisfied: typing-extensions>=4.8.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (4.12.2) 2024-08-07T17:58:22.3576932Z Requirement already satisfied: networkx in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (2.8.8) 2024-08-07T17:58:22.3582183Z Requirement already satisfied: jinja2 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (3.1.4) 2024-08-07T17:58:22.3587487Z Requirement already satisfied: fsspec in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (2024.6.1) 2024-08-07T17:58:22.3604946Z Requirement already satisfied: sympy>=1.13.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from torch->torchvision==0.19.0a0+d23a6e1) (1.13.1) 2024-08-07T17:58:22.3652256Z Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from sympy>=1.13.0->torch->torchvision==0.19.0a0+d23a6e1) (1.3.0) 2024-08-07T17:58:22.4969566Z Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/envs/py_3.10/lib/python3.10/site-packages (from jinja2->torch->torchvision==0.19.0a0+d23a6e1) (2.1.5) 2024-08-07T17:58:22.5298333Z Building wheels for collected packages: torchvision 2024-08-07T18:00:10.6197578Z Building wheel for torchvision (setup.py) ... done 2024-08-07T18:00:10.6280622Z Created wheel for torchvision: filename=torchvision-0.19.0a0+d23a6e1-cp310-cp310-linux_x86_64.whl size=2115982 sha256=e9c80c90842cf92d23df0247b32b1b8ba0c6a0295bebdbc3c01dad134f1be553 2024-08-07T18:00:10.6284567Z Stored in directory: /var/lib/jenkins/.cache/pip/wheels/0e/56/35/02931e71eb23fd2b85591c7ec05b733ca7c8b328a2fd151f96 2024-08-07T18:00:10.6331838Z Successfully built torchvision 2024-08-07T18:00:11.6467887Z Installing collected packages: torchvision 2024-08-07T18:00:12.2199404Z Successfully installed torchvision-0.19.0a0+d23a6e1 2024-08-07T18:00:12.3467925Z + '[' -n '' ']' 2024-08-07T18:00:12.3468273Z + test_python_shard 3 2024-08-07T18:00:12.3468607Z + [[ -z 5 ]] 2024-08-07T18:00:12.3469217Z + python test/run_test.py --exclude-jit-executor --exclude-distributed-tests --shard 3 5 --verbose 2024-08-07T18:00:12.4862719Z /var/lib/jenkins/workspace/test/run_test.py:21: DeprecationWarning: pkg_resources is deprecated as an API.
See https://setuptools.pypa.io/en/latest/pkg_resources.html 2024-08-07T18:00:12.4863764Z import pkg_resources 2024-08-07T18:00:17.2130501Z Downloading https://ossci-metrics.s3.amazonaws.com/slow-tests.json to /var/lib/jenkins/workspace/test/.pytorch-slow-tests.json 2024-08-07T18:00:17.2909561Z Downloading https://ossci-metrics.s3.amazonaws.com/disabled-tests-condensed.json to /var/lib/jenkins/workspace/test/.pytorch-disabled-tests.json 2024-08-07T18:00:17.3362043Z Ignoring disabled issues: [''] 2024-08-07T18:00:17.3503370Z Found test times from artifacts 2024-08-07T18:00:17.4049686Z Found test times from artifacts 2024-08-07T18:00:17.4069790Z Running 25% of tests based on TD 2024-08-07T18:00:17.4410453Z Running parallel tests on 2 processes 2024-08-07T18:00:17.4412866Z Name: tests to run (est. time: 71.12min) 2024-08-07T18:00:17.4413560Z Serial tests (0): 2024-08-07T18:00:17.4413883Z Parallel tests (24): 2024-08-07T18:00:17.4414661Z test_transformers 1/1 2024-08-07T18:00:17.4415052Z functorch/test_ops 2/9 2024-08-07T18:00:17.4415388Z functorch/test_ops 7/9 2024-08-07T18:00:17.4415732Z test_ops 2/11 2024-08-07T18:00:17.4416037Z test_ops 7/11 2024-08-07T18:00:17.4416328Z test_decomp 1/19 2024-08-07T18:00:17.4416757Z test_decomp 6/19 2024-08-07T18:00:17.4417223Z test_decomp 11/19 2024-08-07T18:00:17.4417538Z test_decomp 16/19 2024-08-07T18:00:17.4417866Z test_modules 2/2 2024-08-07T18:00:17.4418203Z test_nestedtensor 1/1 2024-08-07T18:00:17.4418549Z inductor/test_torchinductor 3/4 2024-08-07T18:00:17.4418953Z test_meta 1/5 2024-08-07T18:00:17.4419244Z test_meta 5/5 2024-08-07T18:00:17.4419602Z inductor/test_torchinductor_dynamic_shapes 3/4 2024-08-07T18:00:17.4420085Z inductor/test_cuda_cpp_wrapper 1/1 2024-08-07T18:00:17.4420483Z test_ops_jit 3/3 2024-08-07T18:00:17.4420801Z dynamo/test_skip_non_tensor 1/1 2024-08-07T18:00:17.4421206Z dynamo/test_interop 1/1 2024-08-07T18:00:17.4421566Z inductor/test_extension_backend 1/1 2024-08-07T18:00:17.4422003Z inductor/test_compiled_optimizers 1/1 2024-08-07T18:00:17.4422425Z export/test_tools 1/1 2024-08-07T18:00:17.4422842Z dynamo/test_inline_inbuilt_nn_modules 1/1 2024-08-07T18:00:17.4423285Z inductor/test_move_constructors_to_cuda 1/1 2024-08-07T18:00:17.4423740Z Name: excluded (est. 
time: 22.57min) 2024-08-07T18:00:17.4424126Z Serial tests (0): 2024-08-07T18:00:17.4424427Z Parallel tests (56): 2024-08-07T18:00:17.4424765Z test_sparse 1/1 2024-08-07T18:00:17.4425093Z inductor/test_cpu_repro 1/2 2024-08-07T18:00:17.4425448Z inductor/test_cpu_repro 2/2 2024-08-07T18:00:17.4425819Z test_schema_check 1/1 2024-08-07T18:00:17.4426160Z test_sparse_csr 1/1 2024-08-07T18:00:17.4426465Z test_masked 1/1 2024-08-07T18:00:17.4426833Z torch_np/numpy_tests/core/test_multiarray 1/1 2024-08-07T18:00:17.4427264Z dynamo/test_higher_order_ops 1/1 2024-08-07T18:00:17.4427665Z export/test_export 1/1 2024-08-07T18:00:17.4428036Z test_serialization 1/1 2024-08-07T18:00:17.4428383Z dynamo/test_aot_autograd_cache 1/1 2024-08-07T18:00:17.4428779Z test_jit_autocast 1/1 2024-08-07T18:00:17.4429195Z profiler/test_profiler 1/1 2024-08-07T18:00:17.4429593Z torch_np/numpy_tests/core/test_scalarmath 1/1 2024-08-07T18:00:17.4430041Z test_tensorboard 1/1 2024-08-07T18:00:17.4430399Z inductor/test_foreach 1/1 2024-08-07T18:00:17.4430749Z dynamo/test_backends 1/1 2024-08-07T18:00:17.4431140Z torch_np/numpy_tests/core/test_einsum 1/1 2024-08-07T18:00:17.4431578Z dynamo/test_compile 1/1 2024-08-07T18:00:17.4431939Z higher_order_ops/test_with_effects 1/1 2024-08-07T18:00:17.4432356Z inductor/test_torchbind 1/1 2024-08-07T18:00:17.4432770Z torch_np/numpy_tests/linalg/test_linalg 1/1 2024-08-07T18:00:17.4433187Z test_xnnpack_integration 1/1 2024-08-07T18:00:17.4433568Z export/test_torchbind 1/1 2024-08-07T18:00:17.4433951Z export/test_unflatten 1/1 2024-08-07T18:00:17.4434310Z dynamo/test_subgraphs 1/1 2024-08-07T18:00:17.4434681Z test_segment_reductions 1/1 2024-08-07T18:00:17.4435073Z inductor/test_ordered_set 1/1 2024-08-07T18:00:17.4435432Z test_indexing 1/1 2024-08-07T18:00:17.4435773Z dynamo/test_recompile_ux 1/1 2024-08-07T18:00:17.4436154Z torch_np/test_basic 1/1 2024-08-07T18:00:17.4436721Z nn/test_load_state_dict 1/1 2024-08-07T18:00:17.4437132Z torch_np/numpy_tests/lib/test_shape_base_ 1/1 2024-08-07T18:00:17.4437589Z dynamo/test_verify_correctness 1/1 2024-08-07T18:00:17.4437991Z dynamo/test_export_mutations 1/1 2024-08-07T18:00:17.4438383Z dynamo/test_sources 1/1 2024-08-07T18:00:17.4438736Z test_subclass 1/1 2024-08-07T18:00:17.4439056Z nn/test_lazy_modules 1/1 2024-08-07T18:00:17.4439417Z test_native_functions 1/1 2024-08-07T18:00:17.4439768Z dynamo/test_exc 1/1 2024-08-07T18:00:17.4440116Z profiler/test_profiler_tree 1/1 2024-08-07T18:00:17.4440619Z profiler/test_record_function 1/1 2024-08-07T18:00:17.4441019Z export/test_schema 1/1 2024-08-07T18:00:17.4441363Z test_itt 1/1 2024-08-07T18:00:17.4441680Z test_per_overload_api 1/1 2024-08-07T18:00:17.4442040Z export/test_lift_unlift 1/1 2024-08-07T18:00:17.4442420Z test_model_exports_to_core_aten 1/1 2024-08-07T18:00:17.4442835Z test_sparse_semi_structured 1/1 2024-08-07T18:00:17.4443221Z test_jit_llga_fuser 1/1 2024-08-07T18:00:17.4443589Z inductor/test_pattern_matcher 1/1 2024-08-07T18:00:17.4444007Z inductor/test_split_cat_fx_passes 1/1 2024-08-07T18:00:17.4444410Z inductor/test_snode_runtime 1/1 2024-08-07T18:00:17.4444791Z xpu/test_conv 1/1 2024-08-07T18:00:17.4445131Z inductor/test_cuda_repro 1/1 2024-08-07T18:00:17.4445571Z inductor/test_cudagraph_trees_expandable_segments 1/1 2024-08-07T18:00:17.4446056Z optim/test_lrscheduler 1/1 2024-08-07T18:00:17.4446556Z Starting test batch 'tests to run' 0.0 seconds after initiating testing 2024-08-07T18:00:17.4497760Z Running test_transformers 1/1 ... 
[2024-08-07 18:00:17.449344] 2024-08-07T18:00:17.4502859Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_transformers.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:00:17.449849] 2024-08-07T18:00:35.9030183Z 2024-08-07T18:00:35.9031479Z test_transformers 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_transformers_1.1_10084dc1b049f7b6_.log 2024-08-07T18:00:35.9032399Z Running 0 items in this shard: 2024-08-07T18:00:35.9032648Z 2024-08-07T18:00:35.9037243Z Running functorch/test_ops 2/9 ... [2024-08-07 18:00:35.903416] 2024-08-07T18:00:35.9042120Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'functorch/test_ops.py', '-m', 'serial', '--shard-id=2', '--num-shards=9', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:00:35.903857] 2024-08-07T18:00:44.4370912Z 2024-08-07T18:00:44.4372093Z functorch/test_ops 2/9 was successful, full logs can be found in artifacts with path test/test-reports/functorch.test_ops_2.9_31d3a02af24914a0_.log 2024-08-07T18:00:44.4373014Z Running 0 items in this shard: 2024-08-07T18:00:44.4373300Z 2024-08-07T18:00:44.4378152Z Running functorch/test_ops 7/9 ... [2024-08-07 18:00:44.437451] 2024-08-07T18:00:44.4392071Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'functorch/test_ops.py', '-m', 'serial', '--shard-id=7', '--num-shards=9', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:00:44.437939] 2024-08-07T18:00:52.9209493Z 2024-08-07T18:00:52.9210663Z functorch/test_ops 7/9 was successful, full logs can be found in artifacts with path test/test-reports/functorch.test_ops_7.9_1766e083f3ab9b5c_.log 2024-08-07T18:00:52.9211621Z Running 0 items in this shard: 2024-08-07T18:00:52.9211880Z 2024-08-07T18:00:52.9214309Z Running test_ops 2/11 ... [2024-08-07 18:00:52.921139] 2024-08-07T18:00:52.9219120Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_ops.py', '-m', 'serial', '--shard-id=2', '--num-shards=11', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:00:52.921573] 2024-08-07T18:01:07.8179845Z 2024-08-07T18:01:07.8180779Z test_ops 2/11 was successful, full logs can be found in artifacts with path test/test-reports/test_ops_2.11_f1fa1d6bfcf834f8_.log 2024-08-07T18:01:07.8181600Z Running 0 items in this shard: 2024-08-07T18:01:07.8181843Z 2024-08-07T18:01:07.8184461Z Running test_ops 7/11 ... [2024-08-07 18:01:07.818161] 2024-08-07T18:01:07.8189774Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_ops.py', '-m', 'serial', '--shard-id=7', '--num-shards=11', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:01:07.818603] 2024-08-07T18:01:22.6653642Z 2024-08-07T18:01:22.6654646Z test_ops 7/11 was successful, full logs can be found in artifacts with path test/test-reports/test_ops_7.11_258bfcd3a64223ff_.log 2024-08-07T18:01:22.6655545Z Running 0 items in this shard: 2024-08-07T18:01:22.6656014Z 2024-08-07T18:01:22.6658200Z Running test_decomp 1/19 ... 
[2024-08-07 18:01:22.665480] 2024-08-07T18:01:22.6663244Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_decomp.py', '-m', 'serial', '--shard-id=1', '--num-shards=19', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:01:22.665935] 2024-08-07T18:01:30.5480004Z 2024-08-07T18:01:30.5481083Z test_decomp 1/19 was successful, full logs can be found in artifacts with path test/test-reports/test_decomp_1.19_80f2be07e1945c8f_.log 2024-08-07T18:01:30.5481949Z Running 0 items in this shard: 2024-08-07T18:01:30.5482223Z 2024-08-07T18:01:30.5485311Z Running test_decomp 6/19 ... [2024-08-07 18:01:30.548219] 2024-08-07T18:01:30.5491000Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_decomp.py', '-m', 'serial', '--shard-id=6', '--num-shards=19', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:01:30.548722] 2024-08-07T18:01:38.4309826Z 2024-08-07T18:01:38.4311150Z test_decomp 6/19 was successful, full logs can be found in artifacts with path test/test-reports/test_decomp_6.19_7a2ea32614883937_.log 2024-08-07T18:01:38.4311980Z Running 0 items in this shard: 2024-08-07T18:01:38.4312256Z 2024-08-07T18:01:38.4315716Z Running test_decomp 11/19 ... [2024-08-07 18:01:38.431209] 2024-08-07T18:01:38.4320767Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_decomp.py', '-m', 'serial', '--shard-id=11', '--num-shards=19', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:01:38.431710] 2024-08-07T18:01:46.1629800Z 2024-08-07T18:01:46.1631202Z test_decomp 11/19 was successful, full logs can be found in artifacts with path test/test-reports/test_decomp_11.19_ba415fa6601c404d_.log 2024-08-07T18:01:46.1632324Z Running 0 items in this shard: 2024-08-07T18:01:46.1632643Z 2024-08-07T18:01:46.1634494Z Running test_decomp 16/19 ... [2024-08-07 18:01:46.163121] 2024-08-07T18:01:46.1639841Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_decomp.py', '-m', 'serial', '--shard-id=16', '--num-shards=19', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:01:46.163581] 2024-08-07T18:01:54.0454014Z 2024-08-07T18:01:54.0455477Z test_decomp 16/19 was successful, full logs can be found in artifacts with path test/test-reports/test_decomp_16.19_2af6651f83e467a6_.log 2024-08-07T18:01:54.0456586Z Running 0 items in this shard: 2024-08-07T18:01:54.0457050Z 2024-08-07T18:01:54.0460004Z Running test_modules 2/2 ... [2024-08-07 18:01:54.045610] 2024-08-07T18:01:54.0465319Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_modules.py', '-m', 'serial', '--shard-id=2', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:01:54.046109] 2024-08-07T18:02:00.4739809Z 2024-08-07T18:02:00.4741171Z test_modules 2/2 was successful, full logs can be found in artifacts with path test/test-reports/test_modules_2.2_ff763601b12f1bfe_.log 2024-08-07T18:02:00.4742001Z Running 0 items in this shard: 2024-08-07T18:02:00.4742270Z 2024-08-07T18:02:00.4745110Z Running test_nestedtensor 1/1 ... 
[2024-08-07 18:02:00.474218] 2024-08-07T18:02:00.4750990Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_nestedtensor.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:02:00.474676] 2024-08-07T18:02:06.4026592Z 2024-08-07T18:02:06.4027790Z test_nestedtensor 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_nestedtensor_1.1_4bff0340dfef71ef_.log 2024-08-07T18:02:06.4028737Z Running 0 items in this shard: 2024-08-07T18:02:06.4029005Z 2024-08-07T18:02:06.4032879Z Running inductor/test_torchinductor 3/4 ... [2024-08-07 18:02:06.402956] 2024-08-07T18:02:06.4038871Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_torchinductor.py', '-m', 'serial', '--shard-id=3', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:02:06.403477] 2024-08-07T18:02:16.7909822Z 2024-08-07T18:02:16.7911031Z inductor/test_torchinductor 3/4 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_torchinductor_3.4_563b8b34bc219cdf_.log 2024-08-07T18:02:16.7912066Z Running 0 items in this shard: 2024-08-07T18:02:16.7912315Z 2024-08-07T18:02:16.7915421Z Running test_meta 1/5 ... [2024-08-07 18:02:16.791224] 2024-08-07T18:02:16.7922237Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_meta.py', '-m', 'serial', '--shard-id=1', '--num-shards=5', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:02:16.791683] 2024-08-07T18:02:32.9917413Z 2024-08-07T18:02:32.9918372Z test_meta 1/5 was successful, full logs can be found in artifacts with path test/test-reports/test_meta_1.5_833b5079e16ce8ea_.log 2024-08-07T18:02:32.9919169Z Running 0 items in this shard: 2024-08-07T18:02:32.9919438Z 2024-08-07T18:02:32.9923155Z Running test_meta 5/5 ... [2024-08-07 18:02:32.991984] 2024-08-07T18:02:32.9928608Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_meta.py', '-m', 'serial', '--shard-id=5', '--num-shards=5', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:02:32.992469] 2024-08-07T18:02:49.2925360Z 2024-08-07T18:02:49.2926258Z test_meta 5/5 was successful, full logs can be found in artifacts with path test/test-reports/test_meta_5.5_02b6909cecf74fc4_.log 2024-08-07T18:02:49.2927122Z Running 0 items in this shard: 2024-08-07T18:02:49.2927367Z 2024-08-07T18:02:49.2930997Z Running inductor/test_torchinductor_dynamic_shapes 3/4 ... [2024-08-07 18:02:49.292754] 2024-08-07T18:02:49.2936493Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_torchinductor_dynamic_shapes.py', '-m', 'serial', '--shard-id=3', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 18:02:49.293248] 2024-08-07T18:02:57.4756780Z 2024-08-07T18:02:57.4759182Z inductor/test_torchinductor_dynamic_shapes 3/4 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_torchinductor_dynamic_shapes_3.4_3a3cab365abd6929_.log 2024-08-07T18:02:57.4760602Z Running 0 items in this shard: 2024-08-07T18:02:57.4760852Z 2024-08-07T18:02:57.4762930Z Running inductor/test_cuda_cpp_wrapper 1/1 ... [2024-08-07 18:02:57.475895] 2024-08-07T18:02:57.4768038Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_cuda_cpp_wrapper.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:02:57.476378] 2024-08-07T18:03:05.5530918Z 2024-08-07T18:03:05.5532256Z inductor/test_cuda_cpp_wrapper 1/1 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_cuda_cpp_wrapper_1.1_f4a036acdad6717e_.log 2024-08-07T18:03:05.5533142Z 2024-08-07T18:03:05.5536590Z Running test_ops_jit 3/3 ... [2024-08-07 18:03:05.553302] 2024-08-07T18:03:05.5541222Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_ops_jit.py', '-m', 'serial', '--shard-id=3', '--num-shards=3', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:05.553774] 2024-08-07T18:03:11.2811000Z 2024-08-07T18:03:11.2812110Z test_ops_jit 3/3 was successful, full logs can be found in artifacts with path test/test-reports/test_ops_jit_3.3_65f233e182309ddc_.log 2024-08-07T18:03:11.2812952Z Running 0 items in this shard: 2024-08-07T18:03:11.2813199Z 2024-08-07T18:03:11.2817083Z Running dynamo/test_skip_non_tensor 1/1 ... [2024-08-07 18:03:11.281347] 2024-08-07T18:03:11.2822163Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'dynamo/test_skip_non_tensor.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:11.281855] 2024-08-07T18:03:14.9543678Z 2024-08-07T18:03:14.9545185Z dynamo/test_skip_non_tensor 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_skip_non_tensor_1.1_24e753f022ca7b5d_.log 2024-08-07T18:03:14.9546184Z Running 0 items in this shard: 2024-08-07T18:03:14.9546434Z 2024-08-07T18:03:14.9549283Z Running dynamo/test_interop 1/1 ... [2024-08-07 18:03:14.954588] 2024-08-07T18:03:14.9555247Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'dynamo/test_interop.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:14.955101] 2024-08-07T18:03:18.6278627Z 2024-08-07T18:03:18.6279998Z dynamo/test_interop 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_interop_1.1_91d0225ded58e1d4_.log 2024-08-07T18:03:18.6280932Z Running 0 items in this shard: 2024-08-07T18:03:18.6281213Z 2024-08-07T18:03:18.6284278Z Running inductor/test_extension_backend 1/1 ... 
[2024-08-07 18:03:18.628111] 2024-08-07T18:03:18.6290109Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_extension_backend.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:18.628620] 2024-08-07T18:03:26.3106800Z 2024-08-07T18:03:26.3108146Z inductor/test_extension_backend 1/1 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_extension_backend_1.1_0c1a11fe1311aff7_.log 2024-08-07T18:03:26.3109407Z Running 0 items in this shard: 2024-08-07T18:03:26.3109686Z 2024-08-07T18:03:26.3111690Z Running inductor/test_compiled_optimizers 1/1 ... [2024-08-07 18:03:26.310837] 2024-08-07T18:03:26.3116526Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_compiled_optimizers.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:26.311274] 2024-08-07T18:03:35.5508290Z 2024-08-07T18:03:35.5509487Z inductor/test_compiled_optimizers 1/1 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_compiled_optimizers_1.1_5139c0d6e7a9a7d5_.log 2024-08-07T18:03:35.5510962Z Running 0 items in this shard: 2024-08-07T18:03:35.5511235Z 2024-08-07T18:03:35.5514549Z Running export/test_tools 1/1 ... [2024-08-07 18:03:35.551144] 2024-08-07T18:03:35.5520312Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'export/test_tools.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:35.551690] 2024-08-07T18:03:39.3749466Z 2024-08-07T18:03:39.3750789Z export/test_tools 1/1 was successful, full logs can be found in artifacts with path test/test-reports/export.test_tools_1.1_3b88329f73a82780_.log 2024-08-07T18:03:39.3751717Z Running 0 items in this shard: 2024-08-07T18:03:39.3751965Z 2024-08-07T18:03:39.3754919Z Running dynamo/test_inline_inbuilt_nn_modules 1/1 ... [2024-08-07 18:03:39.375181] 2024-08-07T18:03:39.3760360Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'dynamo/test_inline_inbuilt_nn_modules.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:39.375674] 2024-08-07T18:03:46.0047914Z 2024-08-07T18:03:46.0049419Z dynamo/test_inline_inbuilt_nn_modules 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_inline_inbuilt_nn_modules_1.1_7b8541974312d1a3_.log 2024-08-07T18:03:46.0050508Z Running 0 items in this shard: 2024-08-07T18:03:46.0050755Z 2024-08-07T18:03:46.0053932Z Running inductor/test_move_constructors_to_cuda 1/1 ... [2024-08-07 18:03:46.005030] 2024-08-07T18:03:46.0058932Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_move_constructors_to_cuda.py', '-m', 'serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 18:03:46.005517] 2024-08-07T18:03:49.9492102Z 2024-08-07T18:03:49.9493356Z inductor/test_move_constructors_to_cuda 1/1 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_move_constructors_to_cuda_1.1_2a68e6ea6d2600c6_.log 2024-08-07T18:03:49.9494310Z 2024-08-07T18:03:49.9596496Z Running test_transformers 1/1 ... [2024-08-07 18:03:49.959234] 2024-08-07T18:03:49.9597827Z Running functorch/test_ops 2/9 ... [2024-08-07 18:03:49.959450] 2024-08-07T18:03:49.9604887Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'functorch/test_ops.py', '-m', 'not serial', '--shard-id=2', '--num-shards=9', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:49.960072] 2024-08-07T18:03:49.9607039Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_transformers.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:03:49.960138] 2024-08-07T18:08:21.3391757Z 2024-08-07T18:08:21.3392608Z PRINTING LOG FILE of test_transformers 1/1 (test/test-reports/test_transformers_1.1_2ac14b314d452749_.log) 2024-08-07T18:08:21.3546735Z Test results will be stored in test-reports/python-pytest/test_transformers/test_transformers-6a9eb05ef756150e.xml 2024-08-07T18:08:21.3548527Z ============================= test session starts ============================== 2024-08-07T18:08:21.3549539Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.5.0 -- /opt/conda/envs/py_3.10/bin/python 2024-08-07T18:08:21.3550711Z cachedir: .pytest_cache 2024-08-07T18:08:21.3551902Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2024-08-07T18:08:21.3553401Z rootdir: /var/lib/jenkins/workspace 2024-08-07T18:08:21.3554129Z configfile: pytest.ini 2024-08-07T18:08:21.3555728Z plugins: hypothesis-5.35.1, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, xdist-3.3.1, xdoctest-1.1.0 2024-08-07T18:08:21.3557179Z collecting ... 
collected 45344 items 2024-08-07T18:08:21.3557948Z stepcurrent: Cannot find last run test, not skipping 2024-08-07T18:08:25.5122123Z Running 45344 items in this shard: test/test_transformers.py::TestSDPAPrivateUse1Only::test_fused_sdp_choice_privateuseone, test/test_transformers.py::TestSDPAPrivateUse1Only::test_scaled_dot_product_fused_attention_overrideable, test/test_transformers.py::TestSDPAPrivateUse1Only::test_scaled_dot_product_fused_attention_overrideable_backward, test/test_transformers.py::TestTransformersCUDA::test_bias_is_none_cuda, test/test_transformers.py::TestTransformersCUDA::test_decoder_only_layer_cuda, test/test_transformers.py::TestTransformersCUDA::test_decoder_padding_and_src_mask_bool_cuda, test/test_transformers.py::TestTransformersCUDA::test_disable_fastpath_cuda, test/test_transformers.py::TestTransformersCUDA::test_encoder_is_causal_cuda, test/test_transformers.py::TestTransformersCUDA::test_encoder_padding_and_src_mask_bool_cuda, test/test_transformers.py::TestTransformersCUDA::test_is_causal_gpu_cuda, test/test_transformers.py::TestTransformersCUDA::test_kpm_mask_trailing_column_with_nested_tensor_cuda, test/test_transformers.py::TestTransformersCUDA::test_mask_check_fastpath_cuda, test/test_transformers.py::TestTransformersCUDA::test_mha_native_args_nb_heads_1_bias_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_mha_native_args_nb_heads_1_bias_True_cuda, test/test_transformers.py::TestTransformersCUDA::test_mha_native_args_nb_heads_8_bias_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_mha_native_args_nb_heads_8_bias_True_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim2_key_padding_mask_dim1_bool_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim2_key_padding_mask_dim1_float32_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim2_key_padding_mask_dim_2_bool_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim2_key_padding_mask_dim_2_float32_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim1_bool_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim1_float32_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim_2_bool_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim_2_float32_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim1_bool_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim1_float32_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim_2_bool_cuda, test/test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim_2_float32_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_0_cuda, 
test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_5_cuda, 
test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_0_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_2_cuda, test/test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_5_cuda, test/test_transformers.py::TestTransformersCUDA::test_script_encoder_subclass_cuda, test/test_transformers.py::TestTransformersCUDA::test_script_mha_in_proj_weight_none_cuda, test/test_transformers.py::TestTransformersCUDA::test_self_attn_TxT_attn_mask_cuda, test/test_transformers.py::TestTransformersCUDA::test_train_with_is_causal_cuda, test/test_transformers.py::TestTransformersCUDA::test_train_with_pad_and_catch_error_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformer_bias_is_none_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_False_training_False_enable_nested_tensor_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_False_training_False_enable_nested_tensor_True_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_False_training_True_enable_nested_tensor_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_False_training_True_enable_nested_tensor_True_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_True_training_False_enable_nested_tensor_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_True_training_False_enable_nested_tensor_True_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_True_training_True_enable_nested_tensor_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_True_training_True_enable_nested_tensor_True_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_False_d_model_12_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_False_d_model_256_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_True_d_model_12_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_True_d_model_256_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_False_d_model_12_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_False_d_model_256_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_True_d_model_12_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_True_d_model_256_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_square_input_with_no_grad_False_training_False_enable_nested_tensor_False_cuda, 
test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_square_input_with_no_grad_False_training_True_enable_nested_tensor_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_square_input_with_no_grad_True_training_False_enable_nested_tensor_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoder_square_input_with_no_grad_True_training_True_enable_nested_tensor_False_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_no_fastpath_with_hooks_nhead_3_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_no_fastpath_with_hooks_nhead_4_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_src_mask_nhead_1_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_src_mask_nhead_4_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_src_mask_nhead_8_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_subclass_cuda, test/test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_subclass_model_cuda, test/test_transformers.py::TestTransformersCUDA::test_with_nested_tensor_input_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_dispatch_fails_no_backend_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_atteention_large_bf16_nan_values_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_attention_fail_with_non_square_causal_attention_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_autocast_fp32_bfloat16_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_autocast_fp32_float16_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_193_dropout_p_0_0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_193_dropout_p_0_2_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_204_dropout_p_0_0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_204_dropout_p_0_2_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_256_dropout_p_0_0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_256_dropout_p_0_2_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_flash_fail_fp32_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_fused_kernels_nested_broadcasting_error_cases_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_fused_kernels_nested_broadcasting_requires_grad_failure_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_fused_kernels_seq_len_0_inputs_fused_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_attn_mask_present_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_broadcast_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_dim_3_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_head_dim_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_invalid_dtype_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_1_dimensional_inputs_kernel0_cuda, 
test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_1_dimensional_inputs_kernel1_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_1_dimensional_inputs_kernel2_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_datatypes_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_datatypes_kernel1_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_datatypes_kernel2_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_devices_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_devices_kernel1_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_devices_kernel2_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_last_dim_stride_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_sequence_lengths_kernel0_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_mem_efficient_fail_bfloat16_less_than_sm80_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_nested_fails_on_padding_head_dim_cuda, test/test_transformers.py::TestSDPAFailureModesCUDA::test_unaligned_tensors_cuda, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_0_bfloat16_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_0_float16_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_0_float32_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_0_float64_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_7_bfloat16_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_7_float16_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_7_float32_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_7_float64_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_0_bfloat16_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_0_float16_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_0_float32_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_0_float64_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_7_bfloat16_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_7_float16_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_7_float32_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_7_float64_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_attention_math_with_negative_scale_kernel0_cuda, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_False_cuda_bfloat16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_True_cuda_float16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_False_cuda_float64, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_bfloat16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_bfloat16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_bfloat16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_bfloat16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float16, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float16, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float32, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float32, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float32, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float32, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float32, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float64, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float64, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float64, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float64, 
test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float64, test/test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_with_inf_cuda, test/test_transformers.py::TestSDPACUDA::test_sdp_math_gradcheck_contiguous_inputs_False_cuda, test/test_transformers.py::TestSDPACUDA::test_sdp_math_gradcheck_contiguous_inputs_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_cudnn_attention_different_dk_dv_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_cudnn_attention_fail_d128_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads (node-ID enumeration, continued). Every ID in this list has the form
test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_<Q>_seq_len_k_<K>_head_dim_<D>_is_causal_<C>_dropout_p_<P>_<dtype>_<scale>_cuda_<dtype>
with C in {False, True}, P in {0_0, 0_22, 0_48} (0.0, 0.22, 0.48 with '.' written as '_'), dtype in {bfloat16, float16}, and scale in {scale0, scale_l1}, i.e. 24 IDs per (Q, K, D) triple, listed in lexicographic order of the full ID string. This stretch of the list covers:
  (Q=1024, K=8): the last six IDs of D=8 (is_causal=True: the dropout_p 0_22 float16 pair, then all four 0_48 entries), followed by the full 24-ID grid for D=96;
  (Q=143, K=1024): the full grid for every D in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, 288 IDs in total;
  (Q=143, K=128): the full grid for D=128, then D=160 through all of is_causal=False and is_causal=True dropout_p_0_0 up to float16_scale0 (list continues),
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
[condensed test enumeration] About 350 consecutive parametrizations of test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads: the tail of the seq_len_k=128 sweep (head_dim=96, is_causal=True), the complete seq_len_k=2048 sweep over head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, and the start of the seq_len_k=256 sweep (head_dim 128, 160, and part of 16). Every case uses batch_size=1 and seq_len_q=256, and each (seq_len_k, head_dim) pair fans out over is_causal in {False, True} x dropout_p in {0.0, 0.22, 0.48} x dtype in {bfloat16, float16} x scale in {scale0, scale_l1}. Each ID follows the pattern test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_<K>_head_dim_<D>_is_causal_<C>_dropout_p_<P>_<dtype>_<scale>_cuda_<dtype>.
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, [... several hundred further pytest node IDs of the same form elided for readability. Every entry in this span instantiates TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads on CUDA over the cross-product of: batch_size 1; (seq_len_q, seq_len_k) = (256, 8) and (4, 1024), ending with the start of the (4, 128) block; head_dim 8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203 and 256; is_causal False and True; dropout_p 0.0, 0.22 and 0.48; dtype float16 and bfloat16 (the dtype is repeated in the trailing _cuda_<dtype> suffix); scale scale0 and scale_l1. See the reproduction note after this list; the enumeration then continues. ...]
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, followed by the complete batch_size=1, seq_len_q=4, seq_len_k=4 grid of the same test: one ID per combination of head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {bfloat16, float16} and scale in {scale0, scale_l1}, i.e. 288 IDs of the form test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_<D>_is_causal_<C>_dropout_p_<P>_<dtype>_<scale>_cuda_<dtype> (elided here for readability; the check behind each ID is sketched below).
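Each ID above is one point on that grid, and every point runs the same check: the forward output and the input gradients of the flash-attention SDPA kernel are compared against the math reference backend. Below is a minimal, self-contained sketch of that comparison. It is not the actual body of test_flash_attention_vs_math_ref_grads (which, among other things, reconstructs the flash kernel's dropout mask so that the dropout_p=0.22/0.48 points can be checked too); it assumes a CUDA device and a PyTorch recent enough to ship torch.nn.attention.sdpa_kernel, covers only the dropout_p=0.0 points, and uses an illustrative helper name and illustrative tolerances.

# Sketch only: compare flash-attention SDPA against the math reference,
# forward and backward, at one (shape, is_causal, dtype, scale) point.
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

def check_flash_vs_math(batch_size=1, n_heads=4, seq_len_q=4, seq_len_k=4,
                        head_dim=64, is_causal=False, dtype=torch.float16,
                        scale=None):
    q = torch.rand(batch_size, n_heads, seq_len_q, head_dim,
                   device="cuda", dtype=dtype, requires_grad=True)
    k = torch.rand(batch_size, n_heads, seq_len_k, head_dim,
                   device="cuda", dtype=dtype, requires_grad=True)
    v = torch.rand(batch_size, n_heads, seq_len_k, head_dim,
                   device="cuda", dtype=dtype, requires_grad=True)

    # Forward and backward restricted to the flash-attention kernel.
    with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
        out = F.scaled_dot_product_attention(
            q, k, v, dropout_p=0.0, is_causal=is_causal, scale=scale)
    out.sum().backward()

    # Same computation through the math backend, in float64 for a
    # tighter reference.
    q64, k64, v64 = (t.detach().double().requires_grad_() for t in (q, k, v))
    with sdpa_kernel(SDPBackend.MATH):
        ref = F.scaled_dot_product_attention(
            q64, k64, v64, dropout_p=0.0, is_causal=is_causal, scale=scale)
    ref.sum().backward()

    # Illustrative tolerances; the real test derives per-configuration bounds.
    torch.testing.assert_close(out.double(), ref, atol=1e-2, rtol=1e-2)
    for g, t64 in zip((q.grad, k.grad, v.grad), (q64, k64, v64)):
        torch.testing.assert_close(g.double(), t64.grad, atol=1e-2, rtol=1e-2)

For instance, check_flash_vs_math(seq_len_k=4, head_dim=96, is_causal=True, dtype=torch.bfloat16) corresponds to the dropout_p=0.0 points of the head_dim=96 block in the grid above.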
The enumeration then moves to the seq_len_k=587 sweep: the full grid (same is_causal, dropout_p, dtype and scale combinations as above) for head_dim=128 and head_dim=160, after which this span of the log ends partway through the head_dim=16 grid, at test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16,
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
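For context: each of these parametrizations checks scaled_dot_product_attention's flash-attention backend against the math reference on CUDA, forward and backward. The following is a minimal sketch, not the actual test body in test/test_transformers.py: it fixes dropout_p=0.0 (the dropout_p > 0 cases require replaying the identical dropout mask across backends, which the real test reconstructs explicitly), assumes an arbitrary num_heads=4, and uses illustrative tolerances, whereas the real test bounds the allowed error against a float64 reference run.

import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

def flash_vs_math_ref_grads(seq_len_k, head_dim, is_causal, dtype,
                            batch_size=1, seq_len_q=4, scale=None):
    # num_heads is an assumption for this sketch; the real test fixes its own.
    num_heads = 4

    def make(seq_len):
        return torch.randn(batch_size, num_heads, seq_len, head_dim,
                           device="cuda", dtype=dtype, requires_grad=True)

    q, k, v = make(seq_len_q), make(seq_len_k), make(seq_len_k)
    grad_out = None

    def run(backend):
        # Force SDPA through a single backend via the public context manager.
        nonlocal grad_out
        with sdpa_kernel(backend):
            out = F.scaled_dot_product_attention(
                q, k, v, dropout_p=0.0, is_causal=is_causal, scale=scale)
        if grad_out is None:
            grad_out = torch.randn_like(out)
        # Gradients w.r.t. q, k, v under the same upstream gradient.
        return out, torch.autograd.grad(out, (q, k, v), grad_out)

    out_flash, grads_flash = run(SDPBackend.FLASH_ATTENTION)
    out_math, grads_math = run(SDPBackend.MATH)

    # Illustrative tolerances only; the real test derives per-dtype bounds
    # from the float64 reference.
    atol = rtol = 1e-2 if dtype == torch.bfloat16 else 2e-3
    torch.testing.assert_close(out_flash, out_math, atol=atol, rtol=rtol)
    for g_f, g_m in zip(grads_flash, grads_math):
        torch.testing.assert_close(g_f, g_m, atol=atol, rtol=rtol)

# e.g. the first parametrization in this sweep:
# flash_vs_math_ref_grads(587, 128, is_causal=False, dtype=torch.bfloat16)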
The seq_len_k_64 sweep then continues in the same order: head_dim_128 and head_dim_160 complete their full 24-ID blocks (is_causal False/True, dropout_p 0_0/0_22/0_48, {bfloat16, float16}, {scale0, scale_l1}), and head_dim_16 runs through test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, with the enumeration continuing beyond this span.
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads parametrized instances (batch_size=1, seq_len_q=64): every combination of seq_len_k in {4, 587}, head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, and scale in {scale0, scale_l1}, enumerated in lexicographic test-ID order from test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 through test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16,
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads: parametrized sweep, one test per parameter combination (358 node IDs in this excerpt; the enumeration continues both before and after it):
  batch_size = 1, seq_len_q = 8 (all entries)
  seq_len_k = 4: head_dim in {8, 16, 21, 32, 64, 72, 96, 160, 192, 203, 256}, each complete
  seq_len_k = 587: head_dim in {16, 128, 160} complete; head_dim = 192 listed only partially (cut off mid-sweep)
  per head_dim, the full cross product of:
    is_causal in {False, True}
    dropout_p in {0.0, 0.22, 0.48}
    dtype in {bfloat16, float16}, each with scale0 and scale_l1
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_backwards_throws_determinism_warning_fused_kernel0_warn_only_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_backwards_throws_determinism_warning_fused_kernel0_warn_only_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_query_dense_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_seq_len_1_inputs_fused_kernel0_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_sdp_choice_type_dense_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_sdp_choice_type_nested_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_long_sequence_mask_float16_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_long_sequence_mask_float32_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_non_contig_mask_bug_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_non_contiguous_mask_float16_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_non_contiguous_mask_float32_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_pad_mask_float16_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_pad_mask_float32_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_backwards_determinism_cuda, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_mask_variants_mask_dim_1_cuda, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_mask_variants_mask_dim_2_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_mask_variants_mask_dim_3_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_mask_variants_mask_dim_4_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
Selected-test list (continued): parametrized variants of test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads, one pytest node ID per parameter combination, each of the form
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_<B>_seq_len_q_<Q>_seq_len_k_<K>_head_dim_<D>_is_causal_<C>_dropout_p_<P>_<dtype>_<scale>_cuda_<dtype>
enumerated in lexicographic name order. This portion of the list covers:
  batch_size: 1
  (seq_len_q, seq_len_k): (512, 64), (512, 8), (64, 128), (64, 256), (64, 4), (64, 512) (the (64, 512) group continues past this portion)
  head_dim: 16, 32, 64, 8
  is_causal: False, True
  dropout_p: 0_0, 0_22
  dtype and scale: float16 and float32, each with scale variants scale0 and scale_l1
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16, 
test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_accuracy_type_dense_fused_kernel0_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_accuracy_type_nested_fused_kernel0_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_type_dense_is_contiguous_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_type_dense_is_contiguous_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_type_nested_is_contiguous_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_type_nested_is_contiguous_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_choice_with_determinism_warn_only_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_choice_with_determinism_warn_only_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_False_is_causal_False_bfloat16_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_False_is_causal_False_float16_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_False_is_causal_True_bfloat16_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_False_is_causal_True_float16_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_True_is_causal_False_bfloat16_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_True_is_causal_False_float16_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_True_is_causal_True_bfloat16_cuda_bfloat16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_True_is_causal_True_float16_cuda_float16, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_mem_efficient_grad_against_math_contiguous_inputs_False_is_causal_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_mem_efficient_grad_against_math_contiguous_inputs_False_is_causal_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_mem_efficient_grad_against_math_contiguous_inputs_True_is_causal_False_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_mem_efficient_grad_against_math_contiguous_inputs_True_is_causal_True_cuda, test/test_transformers.py::TestSDPACudaOnlyCUDA::test_singelton_head_dim_stride_ne_1_cuda, 
test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_LOWER_RIGHT_shape0_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_LOWER_RIGHT_shape1_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_LOWER_RIGHT_shape2_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_LOWER_RIGHT_shape3_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_UPPER_LEFT_shape0_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_UPPER_LEFT_shape1_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_UPPER_LEFT_shape2_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_UPPER_LEFT_shape3_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_LOWER_RIGHT_shape0_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_LOWER_RIGHT_shape1_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_LOWER_RIGHT_shape2_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_LOWER_RIGHT_shape3_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_UPPER_LEFT_shape0_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_UPPER_LEFT_shape1_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_UPPER_LEFT_shape2_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_UPPER_LEFT_shape3_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_is_causal_and_mask_fails_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_is_causal_equals_upper_left_shape0_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_is_causal_equals_upper_left_shape1_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_is_causal_equals_upper_left_shape2_cuda, test/test_transformers.py::TestAttnBiasCUDA::test_is_causal_equals_upper_left_shape3_cuda 2024-08-07T18:08:29.5534805Z 2024-08-07T18:08:29.5539404Z test_transformers.py::TestSDPAPrivateUse1Only::test_fused_sdp_choice_privateuseone [1/2] c++ -MMD -MF open_registration_extension.o.d -DTORCH_EXTENSION_NAME=custom_device_extension -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1013\" -I/var/lib/jenkins/workspace/test/cpp_extensions -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/TH -isystem /opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/include/THC -isystem /opt/conda/envs/py_3.10/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -std=c++17 -g -c /var/lib/jenkins/workspace/test/cpp_extensions/open_registration_extension.cpp -o open_registration_extension.o 2024-08-07T18:08:29.5544946Z [2/2] c++ open_registration_extension.o -shared -L/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/lib -lc10 -ltorch_cpu 
-ltorch -ltorch_python -o custom_device_extension.so 2024-08-07T18:08:29.5548022Z SKIPPED [0.0011s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/132862 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.5550457Z test_transformers.py::TestSDPAPrivateUse1Only::test_scaled_dot_product_fused_attention_overrideable PASSED [0.0133s] [ 0%] 2024-08-07T18:08:29.5552337Z test_transformers.py::TestSDPAPrivateUse1Only::test_scaled_dot_product_fused_attention_overrideable_backward PASSED [0.0223s] [ 0%] 2024-08-07T18:08:29.5553820Z test_transformers.py::TestTransformersCUDA::test_bias_is_none_cuda PASSED [0.0058s] [ 0%] 2024-08-07T18:08:29.5555273Z test_transformers.py::TestTransformersCUDA::test_decoder_only_layer_cuda SKIPPED [0.0003s] (Fairseq not found) [ 0%] 2024-08-07T18:08:29.5556953Z test_transformers.py::TestTransformersCUDA::test_decoder_padding_and_src_mask_bool_cuda SKIPPED [0.0003s] (not supported on pre-3.11 Python) [ 0%] 2024-08-07T18:08:29.5558494Z test_transformers.py::TestTransformersCUDA::test_disable_fastpath_cuda PASSED [0.3072s] [ 0%] 2024-08-07T18:08:29.5559860Z test_transformers.py::TestTransformersCUDA::test_encoder_is_causal_cuda PASSED [0.0064s] [ 0%] 2024-08-07T18:08:29.5561467Z test_transformers.py::TestTransformersCUDA::test_encoder_padding_and_src_mask_bool_cuda SKIPPED [0.0003s] (not supported on pre-3.11 Python) [ 0%] 2024-08-07T18:08:29.5563282Z test_transformers.py::TestTransformersCUDA::test_is_causal_gpu_cuda SKIPPED [0.0002s] (Platform does not support fused SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.5565052Z test_transformers.py::TestTransformersCUDA::test_kpm_mask_trailing_column_with_nested_tensor_cuda PASSED [0.0442s] [ 0%] 2024-08-07T18:08:29.5566531Z test_transformers.py::TestTransformersCUDA::test_mask_check_fastpath_cuda PASSED [0.0165s] [ 0%] 2024-08-07T18:08:29.5567975Z test_transformers.py::TestTransformersCUDA::test_mha_native_args_nb_heads_1_bias_False_cuda PASSED [0.0050s] [ 0%] 2024-08-07T18:08:29.5569494Z test_transformers.py::TestTransformersCUDA::test_mha_native_args_nb_heads_1_bias_True_cuda PASSED [0.0050s] [ 0%] 2024-08-07T18:08:29.5571003Z test_transformers.py::TestTransformersCUDA::test_mha_native_args_nb_heads_8_bias_False_cuda PASSED [0.0043s] [ 0%] 2024-08-07T18:08:29.5572502Z test_transformers.py::TestTransformersCUDA::test_mha_native_args_nb_heads_8_bias_True_cuda PASSED [0.0235s] [ 0%] 2024-08-07T18:08:29.5574673Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim2_key_padding_mask_dim1_bool_cuda PASSED [0.0404s] [ 0%] 2024-08-07T18:08:29.5576767Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim2_key_padding_mask_dim1_float32_cuda PASSED [0.0041s] [ 0%] 2024-08-07T18:08:29.5578809Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim2_key_padding_mask_dim_2_bool_cuda PASSED [0.0099s] [ 0%] 2024-08-07T18:08:29.5580853Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim2_key_padding_mask_dim_2_float32_cuda PASSED [0.0040s] [ 0%] 2024-08-07T18:08:29.5582838Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim1_bool_cuda PASSED [0.0040s] [ 0%] 
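The [1/2]/[2/2] build steps above are test_transformers.py compiling its open_registration_extension.cpp helper on the fly before the TestSDPAPrivateUse1Only cases run. A minimal sketch of that kind of on-the-fly build via torch.utils.cpp_extension (paths and flags are illustrative, not the job's exact invocation):

import torch.utils.cpp_extension

# Compile the helper and import the resulting .so as a Python module. The name
# becomes TORCH_EXTENSION_NAME, and verbose=True prints [1/2]/[2/2] ninja steps
# like the ones captured in this log. Paths and flags here are illustrative.
mod = torch.utils.cpp_extension.load(
    name="custom_device_extension",
    sources=["test/cpp_extensions/open_registration_extension.cpp"],
    extra_cflags=["-g"],
    verbose=True,
)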
2024-08-07T18:08:29.5584764Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim1_float32_cuda PASSED [0.0040s] [ 0%] 2024-08-07T18:08:29.5586805Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim_2_bool_cuda PASSED [0.0040s] [ 0%] 2024-08-07T18:08:29.5588777Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_2_key_padding_mask_dim_2_float32_cuda PASSED [0.0041s] [ 0%] 2024-08-07T18:08:29.5590754Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim1_bool_cuda PASSED [0.0038s] [ 0%] 2024-08-07T18:08:29.5593088Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim1_float32_cuda PASSED [0.0039s] [ 0%] 2024-08-07T18:08:29.5595426Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim_2_bool_cuda PASSED [0.0041s] [ 0%] 2024-08-07T18:08:29.5597534Z test_transformers.py::TestTransformersCUDA::test_multiheadattention_fastpath_attn_mask_attn_mask_dim_3_key_padding_mask_dim_2_float32_cuda PASSED [0.0041s] [ 0%] 2024-08-07T18:08:29.5599555Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_0_cuda PASSED [0.0091s] [ 0%] 2024-08-07T18:08:29.5601851Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_2_cuda PASSED [0.0047s] [ 0%] 2024-08-07T18:08:29.5603648Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_attn_mask_dropout_p_0_5_cuda PASSED [0.0046s] [ 0%] 2024-08-07T18:08:29.5605656Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_0_cuda PASSED [0.0100s] [ 0%] 2024-08-07T18:08:29.5607524Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_2_cuda PASSED [0.0052s] [ 0%] 2024-08-07T18:08:29.5609429Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_2D_causal_attn_mask_dropout_p_0_5_cuda PASSED [0.0052s] [ 0%] 2024-08-07T18:08:29.5611236Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_0_cuda PASSED [0.0045s] [ 0%] 2024-08-07T18:08:29.5613057Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_2_cuda PASSED [0.0045s] [ 0%] 2024-08-07T18:08:29.5615094Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_attn_mask_dropout_p_0_5_cuda PASSED [0.0046s] [ 0%] 2024-08-07T18:08:29.5617095Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_0_cuda PASSED [0.0049s] [ 0%] 2024-08-07T18:08:29.5619014Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_2_cuda PASSED [0.0051s] [ 0%] 2024-08-07T18:08:29.5620885Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_3D_causal_attn_mask_dropout_p_0_5_cuda PASSED [0.0053s] [ 0%] 2024-08-07T18:08:29.5624265Z 
test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_0_cuda SKIPPED [0.0010s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131086 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.5628807Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_2_cuda SKIPPED [0.0011s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131146 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.5633406Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_5_cuda SKIPPED [0.0010s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131123 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.5636430Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_0_cuda PASSED [0.0047s] [ 0%] 2024-08-07T18:08:29.5638224Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_2_cuda PASSED [0.0047s] [ 0%] 2024-08-07T18:08:29.5640153Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_attn_mask_dropout_p_0_5_cuda PASSED [0.0043s] [ 0%] 2024-08-07T18:08:29.5641933Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_0_cuda PASSED [0.0051s] [ 0%] 2024-08-07T18:08:29.5643773Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_2_cuda PASSED [0.0048s] [ 0%] 2024-08-07T18:08:29.5645726Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_2D_causal_attn_mask_dropout_p_0_5_cuda PASSED [0.0049s] [ 0%] 2024-08-07T18:08:29.5647545Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_0_cuda PASSED [0.0042s] [ 0%] 2024-08-07T18:08:29.5649253Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_2_cuda PASSED [0.0044s] [ 0%] 2024-08-07T18:08:29.5651119Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_attn_mask_dropout_p_0_5_cuda PASSED [0.0043s] [ 0%] 2024-08-07T18:08:29.5653052Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_0_cuda PASSED [0.0047s] [ 0%] 2024-08-07T18:08:29.5655035Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_2_cuda PASSED [0.0048s] [ 0%] 2024-08-07T18:08:29.5656888Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_4D_causal_attn_mask_dropout_p_0_5_cuda PASSED [0.0048s] [ 0%] 
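The three SKIPPED entries above illustrate the disabled-test mechanism: each case is switched off by an open GitHub issue, and the skip message spells out how to opt back in locally. A hedged sketch of re-running one of them outside CI (assuming a plain pytest invocation; the repo's own test runner may add further flags):

import os
import subprocess

env = dict(os.environ)
env.pop("CI", None)  # per the skip message: make sure CI is not set

# Run one disabled case directly; do NOT pass --import-disabled-tests,
# or the disable list is honored again.
subprocess.run(
    [
        "python", "-m", "pytest", "test/test_transformers.py", "-v",
        "-k", "test_scaled_dot_product_attention_3D_input_dim_no_attn_mask_dropout_p_0_0",
    ],
    env=env,
    check=True,
)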
2024-08-07T18:08:29.5660213Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_0_cuda SKIPPED [0.0010s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/129853 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.5664813Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_2_cuda SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131107 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.5669374Z test_transformers.py::TestTransformersCUDA::test_scaled_dot_product_attention_4D_input_dim_no_attn_mask_dropout_p_0_5_cuda SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131179 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.5672189Z test_transformers.py::TestTransformersCUDA::test_script_encoder_subclass_cuda PASSED [0.4210s] [ 0%] 2024-08-07T18:08:29.5673896Z test_transformers.py::TestTransformersCUDA::test_script_mha_in_proj_weight_none_cuda PASSED [0.0375s] [ 0%] 2024-08-07T18:08:29.5675629Z test_transformers.py::TestTransformersCUDA::test_self_attn_TxT_attn_mask_cuda SKIPPED [0.0003s] (4D mask not supported yet - activate when 4D mask supported) [ 0%] 2024-08-07T18:08:29.5677178Z test_transformers.py::TestTransformersCUDA::test_train_with_is_causal_cuda PASSED [0.0549s] [ 0%] 2024-08-07T18:08:29.5678977Z test_transformers.py::TestTransformersCUDA::test_train_with_pad_and_catch_error_cuda SKIPPED [0.0017s] (test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test) [ 0%] 2024-08-07T18:08:29.5680635Z test_transformers.py::TestTransformersCUDA::test_transformer_bias_is_none_cuda PASSED [0.0494s] [ 0%] 2024-08-07T18:08:29.5682335Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_False_training_False_enable_nested_tensor_False_cuda PASSED [0.0544s] [ 0%] 2024-08-07T18:08:29.5684344Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_False_training_False_enable_nested_tensor_True_cuda PASSED [0.0538s] [ 0%] 2024-08-07T18:08:29.5686286Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_False_training_True_enable_nested_tensor_False_cuda PASSED [0.0566s] [ 0%] 2024-08-07T18:08:29.5688233Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_False_training_True_enable_nested_tensor_True_cuda PASSED [0.0569s] [ 0%] 2024-08-07T18:08:29.5690279Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_True_training_False_enable_nested_tensor_False_cuda PASSED [0.0447s] [ 0%] 2024-08-07T18:08:29.5692273Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_True_training_False_enable_nested_tensor_True_cuda PASSED [0.0734s] [ 0%] 2024-08-07T18:08:29.5694285Z 
test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_True_training_True_enable_nested_tensor_False_cuda PASSED [0.0580s] [ 0%] 2024-08-07T18:08:29.5696906Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_batch_first_True_training_True_enable_nested_tensor_True_cuda PASSED [0.0572s] [ 0%] 2024-08-07T18:08:29.5699024Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_False_d_model_12_cuda PASSED [0.0591s] [ 0%] 2024-08-07T18:08:29.5701300Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_False_d_model_256_cuda PASSED [0.1022s] [ 0%] 2024-08-07T18:08:29.5703520Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_True_d_model_12_cuda PASSED [0.1653s] [ 0%] 2024-08-07T18:08:29.5705764Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_False_use_autocast_True_d_model_256_cuda PASSED [0.2985s] [ 0%] 2024-08-07T18:08:29.5708198Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_False_d_model_12_cuda PASSED [0.0601s] [ 0%] 2024-08-07T18:08:29.5710460Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_False_d_model_256_cuda PASSED [0.0902s] [ 0%] 2024-08-07T18:08:29.5712726Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_True_d_model_12_cuda PASSED [0.1619s] [ 0%] 2024-08-07T18:08:29.5715002Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_fastpath_use_torchscript_False_enable_nested_tensor_True_use_autocast_True_d_model_256_cuda PASSED [0.2971s] [ 0%] 2024-08-07T18:08:29.5717155Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_square_input_with_no_grad_False_training_False_enable_nested_tensor_False_cuda PASSED [0.0122s] [ 0%] 2024-08-07T18:08:29.5719224Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_square_input_with_no_grad_False_training_True_enable_nested_tensor_False_cuda PASSED [0.0113s] [ 0%] 2024-08-07T18:08:29.5721376Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_square_input_with_no_grad_True_training_False_enable_nested_tensor_False_cuda PASSED [0.0104s] [ 0%] 2024-08-07T18:08:29.5723449Z test_transformers.py::TestTransformersCUDA::test_transformerencoder_square_input_with_no_grad_True_training_True_enable_nested_tensor_False_cuda PASSED [0.0110s] [ 0%] 2024-08-07T18:08:29.5725321Z test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_no_fastpath_with_hooks_nhead_3_cuda PASSED [0.0040s] [ 0%] 2024-08-07T18:08:29.5727104Z test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_no_fastpath_with_hooks_nhead_4_cuda PASSED [0.0039s] [ 0%] 2024-08-07T18:08:29.5728758Z test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_src_mask_nhead_1_cuda PASSED [0.0057s] [ 0%] 2024-08-07T18:08:29.5730322Z test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_src_mask_nhead_4_cuda PASSED [0.0052s] [ 0%] 2024-08-07T18:08:29.5732008Z 
test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_src_mask_nhead_8_cuda PASSED [0.0052s] [ 0%] 2024-08-07T18:08:29.5733721Z test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_subclass_cuda PASSED [0.4515s] [ 0%] 2024-08-07T18:08:29.5735412Z test_transformers.py::TestTransformersCUDA::test_transformerencoderlayer_subclass_model_cuda PASSED [2.2421s] [ 0%] 2024-08-07T18:08:29.5736994Z test_transformers.py::TestTransformersCUDA::test_with_nested_tensor_input_cuda PASSED [0.0214s] [ 0%] 2024-08-07T18:08:29.5738437Z test_transformers.py::TestSDPAFailureModesCUDA::test_dispatch_fails_no_backend_cuda PASSED [0.0024s] [ 0%] 2024-08-07T18:08:29.5740072Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_atteention_large_bf16_nan_values_cuda SKIPPED [0.0003s] (Does not support flash attention) [ 0%] 2024-08-07T18:08:29.5742179Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_attention_fail_with_non_square_causal_attention_cuda SKIPPED [0.0002s] (Does not support flash attention) [ 0%] 2024-08-07T18:08:29.5744172Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_autocast_fp32_bfloat16_cuda SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.5746094Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_autocast_fp32_float16_cuda SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.5748161Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_193_dropout_p_0_0_cuda SKIPPED [0.0002s] (Does not support fused SDPA or not SM86+ hardware) [ 0%] 2024-08-07T18:08:29.5750399Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_193_dropout_p_0_2_cuda SKIPPED [0.0002s] (Does not support fused SDPA or not SM86+ hardware) [ 0%] 2024-08-07T18:08:29.5752660Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_204_dropout_p_0_0_cuda SKIPPED [0.0005s] (Does not support fused SDPA or not SM86+ hardware) [ 0%] 2024-08-07T18:08:29.5754966Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_204_dropout_p_0_2_cuda SKIPPED [0.0002s] (Does not support fused SDPA or not SM86+ hardware) [ 0%] 2024-08-07T18:08:29.5757209Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_256_dropout_p_0_0_cuda SKIPPED [0.0002s] (Does not support fused SDPA or not SM86+ hardware) [ 0%] 2024-08-07T18:08:29.5759406Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_backward_failure_sm86plus_head_dim_256_dropout_p_0_2_cuda SKIPPED [0.0002s] (Does not support fused SDPA or not SM86+ hardware) [ 0%] 2024-08-07T18:08:29.5761440Z test_transformers.py::TestSDPAFailureModesCUDA::test_flash_fail_fp32_cuda SKIPPED [0.0002s] (Does not support fused SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.5763213Z test_transformers.py::TestSDPAFailureModesCUDA::test_fused_kernels_nested_broadcasting_error_cases_cuda PASSED [0.0044s] [ 0%] 2024-08-07T18:08:29.5765252Z test_transformers.py::TestSDPAFailureModesCUDA::test_fused_kernels_nested_broadcasting_requires_grad_failure_cuda SKIPPED [0.0002s] (Fused SDPA was not built for this system) [ 0%] 2024-08-07T18:08:29.5767143Z test_transformers.py::TestSDPAFailureModesCUDA::test_fused_kernels_seq_len_0_inputs_fused_kernel0_cuda PASSED [0.0043s] [ 0%] 2024-08-07T18:08:29.5768909Z 
test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_attn_mask_present_kernel0_cuda SKIPPED [0.0002s] (Does not support flash attention) [ 0%] 2024-08-07T18:08:29.5770753Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_broadcast_kernel0_cuda PASSED [0.0021s] [ 0%] 2024-08-07T18:08:29.5772428Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_dim_3_kernel0_cuda PASSED [0.0072s] [ 0%] 2024-08-07T18:08:29.5774371Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_head_dim_kernel0_cuda SKIPPED [0.0003s] (Does not support flash_attention fused scaled dot product attention) [ 0%] 2024-08-07T18:08:29.5776335Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_fused_inputs_invalid_dtype_kernel0_cuda PASSED [0.0020s] [ 0%] 2024-08-07T18:08:29.5777986Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_1_dimensional_inputs_kernel0_cuda PASSED [0.0017s] [ 0%] 2024-08-07T18:08:29.5779609Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_1_dimensional_inputs_kernel1_cuda PASSED [0.0017s] [ 0%] 2024-08-07T18:08:29.5781412Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_1_dimensional_inputs_kernel2_cuda PASSED [0.0018s] [ 0%] 2024-08-07T18:08:29.5783022Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_datatypes_kernel0_cuda PASSED [0.0017s] [ 0%] 2024-08-07T18:08:29.5784705Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_datatypes_kernel1_cuda PASSED [0.0017s] [ 0%] 2024-08-07T18:08:29.5786441Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_datatypes_kernel2_cuda PASSED [0.0018s] [ 0%] 2024-08-07T18:08:29.5788073Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_devices_kernel0_cuda PASSED [0.0017s] [ 0%] 2024-08-07T18:08:29.5789623Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_devices_kernel1_cuda PASSED [0.0017s] [ 0%] 2024-08-07T18:08:29.5791341Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_inputs_different_devices_kernel2_cuda PASSED [0.0018s] [ 0%] 2024-08-07T18:08:29.5792918Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_last_dim_stride_kernel0_cuda PASSED [0.0073s] [ 0%] 2024-08-07T18:08:29.5794541Z test_transformers.py::TestSDPAFailureModesCUDA::test_invalid_sequence_lengths_kernel0_cuda PASSED [0.0053s] [ 0%] 2024-08-07T18:08:29.5796605Z test_transformers.py::TestSDPAFailureModesCUDA::test_mem_efficient_fail_bfloat16_less_than_sm80_cuda PASSED [0.0053s] [ 0%] 2024-08-07T18:08:29.5798386Z test_transformers.py::TestSDPAFailureModesCUDA::test_nested_fails_on_padding_head_dim_cuda SKIPPED [0.0003s] (Fused SDPA was not built for this system) [ 0%] 2024-08-07T18:08:29.5800336Z test_transformers.py::TestSDPAFailureModesCUDA::test_unaligned_tensors_cuda SKIPPED [0.0002s] (Does not support fused SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.5802222Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_0_bfloat16_cuda_bfloat16 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5803993Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_0_float16_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5805821Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_0_float32_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 
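The test_fused_sdp_choice_* cases beginning here check which scaled-dot-product-attention backend the dispatcher picks for a given tensor type, dtype, and dropout. A sketch of pinning that choice explicitly, assuming the torch.nn.attention.sdpa_kernel context manager available in recent PyTorch builds:

import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(2, 4, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 4, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 4, 128, 64, device="cuda", dtype=torch.float16)

# Restrict dispatch to the listed backends; the math reference is used only if
# no fused kernel in the list supports these inputs.
with sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION, SDPBackend.MATH]):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)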
2024-08-07T18:08:29.5807670Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_0_float64_cuda_float64 SKIPPED [0.0020s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5809487Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_7_bfloat16_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5811406Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_7_float16_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5813389Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_7_float32_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5815211Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_dense_dropout_0_7_float64_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5817129Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_0_bfloat16_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5818947Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_0_float16_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5820726Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_0_float32_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5822663Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_0_float64_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5824473Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_7_bfloat16_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5826343Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_7_float16_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5828132Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_7_float32_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5829934Z test_transformers.py::TestSDPACUDA::test_fused_sdp_choice_cpu_type_nested_dropout_0_7_float64_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5831739Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_attention_math_with_negative_scale_kernel0_cuda PASSED [0.0026s] [ 0%] 2024-08-07T18:08:29.5834136Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5836965Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5839842Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_False_cuda_bfloat16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5842735Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5845551Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5848546Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5851499Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5854363Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5857209Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5860018Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5862812Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_False_cuda_float16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5865714Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5868532Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5871403Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5874227Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 
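The *_mask_vs_math_* grid around this point compares a fused kernel against the unfused math path for boolean masks of rank 2 and 4. A small illustration of the call being parametrized, with shapes of my own choosing rather than the tests' exact batch_size_2/q_seq_len_267/kv_seq_len_514 values:

import torch
import torch.nn.functional as F

# (batch, n_head, seq_q, head_dim) queries against a longer key/value sequence.
q = torch.randn(2, 3, 32, 8, device="cuda", dtype=torch.float16)
k = torch.randn(2, 3, 48, 8, device="cuda", dtype=torch.float16)
v = torch.randn(2, 3, 48, 8, device="cuda", dtype=torch.float16)

# mask_dim_2: a (seq_q, seq_k) boolean mask broadcast over batch and heads;
# mask_dim_4 would be a full (batch, n_head, seq_q, seq_k) mask instead.
bool_mask = torch.rand(32, 48, device="cuda") > 0.1

fused = F.scaled_dot_product_attention(q, k, v, attn_mask=bool_mask)
# Higher-precision reference, in the spirit of the *_vs_math_* comparisons.
math_ref = F.scaled_dot_product_attention(
    q.float(), k.float(), v.float(), attn_mask=bool_mask
).to(q.dtype)
print((fused - math_ref).abs().max())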
2024-08-07T18:08:29.5876984Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float16_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5879822Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_False_cuda_float32 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5882650Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5885540Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_False_cuda_float32 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5888506Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_True_cuda_float32 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5891311Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5894077Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5897502Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5900333Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float32_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5903214Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5906178Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_0_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5908982Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_False_cuda_float64 SKIPPED [0.0018s] (Only runs on cpu) [ 
0%] 2024-08-07T18:08:29.5911825Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_2_bool_mask_1_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5914607Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5917396Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_0_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5920313Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5923303Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_mask_vs_math_cpu_fused_kernel0_float64_batch_size_2_q_seq_len_267_kv_seq_len_514_n_head_3_head_dim_8_mask_dim_4_bool_mask_1_train_True_cuda_float64 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5926177Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5928900Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5931527Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5934077Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5936774Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5939415Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5942005Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5944717Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5947347Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5949891Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5952622Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5955268Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5957933Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5960613Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5963307Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5966029Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5968681Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5971282Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5973990Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5976624Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5979207Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5981847Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5984506Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5987068Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5989779Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5992423Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5995452Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.5998100Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6000878Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6003640Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6006276Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6008865Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6011585Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6014180Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6016766Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6019481Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6022106Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6024675Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6027301Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6029921Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6032498Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6035214Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6037907Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6040692Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6043263Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6045857Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6048518Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6051154Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6053767Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6056445Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6058999Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0022s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6061595Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6064269Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6066880Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6069504Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6072234Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6074773Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6077436Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6080237Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6082850Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6085497Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_bfloat16 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6088145Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6090670Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6093281Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_bfloat16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_bfloat16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6096410Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6099045Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6101751Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6104341Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6106881Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6109566Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6112144Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6114846Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6117675Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6120326Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6122847Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6125595Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6128191Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6130906Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6133554Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6136157Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6138715Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6141343Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6143918Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6146735Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6149347Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6151994Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6154782Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6157360Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6159912Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6162629Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6165214Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6167711Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6170416Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6173010Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6175659Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6178258Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6180896Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6183480Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6186088Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6188675Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6191449Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6194155Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6197201Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6199833Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6202497Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6205128Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6207855Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float16 SKIPPED [0.0017s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6210436Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6213051Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6215617Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6218241Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6220861Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6223473Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6226074Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6228854Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6231486Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6234082Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6236783Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6239346Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6241923Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6244634Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float16 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6247164Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6249737Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float16 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6252427Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6255000Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6257632Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6260255Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float16 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6262756Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float16_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float16 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6265413Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6268136Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6270809Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float32 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6273536Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6276085Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float32 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6278701Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6281390Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6283964Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6286578Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6289326Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6291914Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float32 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6294490Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6297686Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6300278Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6302968Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6305770Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6308455Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6311080Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6313707Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float32 SKIPPED [0.0022s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6316289Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6318981Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6321618Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6324153Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6326803Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6329386Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6331961Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6334708Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float32 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6337282Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6339767Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6342542Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6346089Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6348771Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6351400Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6354023Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6356602Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float32 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6359222Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6361810Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6364500Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6367130Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6369681Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6372331Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6374941Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6377541Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float32 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6380218Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6382909Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6385684Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6388245Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6390812Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6393435Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6396501Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6399108Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float32 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6401680Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6404343Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6406958Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6409619Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6412189Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6414727Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6417343Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6420193Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float32 SKIPPED [0.0017s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6422908Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float32 SKIPPED [0.0016s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6425574Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6428153Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6430754Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6433421Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float32_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float32 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6436023Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6438683Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6441306Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float64 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6443900Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6678030Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6681043Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6683530Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6685971Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6688832Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6691465Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6693898Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float64 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6697118Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6699593Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6702084Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6704530Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6706945Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6709392Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6711847Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6714284Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float64 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6716720Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6719217Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6721653Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6724085Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6726666Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6729233Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6731674Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6734125Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float64 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6736565Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6739011Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6741466Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6743909Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6746346Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_12_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6748782Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6751227Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6753676Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_False_cuda_float64 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6756147Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_16_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6758608Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6761131Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6763664Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6766089Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_1_head_dim_8_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6768572Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6771045Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6773496Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_False_cuda_float64 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6775912Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_16_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6778381Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6780837Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6783267Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6785694Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_1030_n_head_3_head_dim_8_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6788132Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6790566Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6793000Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_False_cuda_float64 SKIPPED [0.0022s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6795920Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_16_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6798542Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6801178Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6803590Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6806018Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_1_head_dim_8_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6808460Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6810898Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6813332Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_False_cuda_float64 SKIPPED [0.0018s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6815787Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_16_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6818245Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6820680Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_False_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6823100Z 
test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_False_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6825521Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_vs_math_cpu_fused_kernel0_float64_batch_size_2_seq_len_267_n_head_3_head_dim_8_causal_True_train_True_cuda_float64 SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6827460Z test_transformers.py::TestSDPACUDA::test_scaled_dot_product_fused_attention_with_inf_cuda SKIPPED [0.0015s] (Only runs on cpu) [ 0%] 2024-08-07T18:08:29.6830409Z test_transformers.py::TestSDPACUDA::test_sdp_math_gradcheck_contiguous_inputs_False_cuda SKIPPED [0.0010s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131625 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.6834587Z test_transformers.py::TestSDPACUDA::test_sdp_math_gradcheck_contiguous_inputs_True_cuda SKIPPED [0.0012s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131255 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 0%] 2024-08-07T18:08:29.6837507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_cudnn_attention_different_dk_dv_cuda SKIPPED [0.0003s] (cuDNN Attention is not supported on this system) [ 0%] 2024-08-07T18:08:29.6839289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_cudnn_attention_fail_d128_cuda SKIPPED [0.0002s] (cuDNN Attention is not supported on this system) [ 0%] 2024-08-07T18:08:29.6841575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6844304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6846994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6849667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6852388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6855141Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6857836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6860533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6863229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6865934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6868718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6871511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6874226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6876925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6879616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6882270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 2024-08-07T18:08:29.6884953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 0%] 
2024-08-07T18:08:29.6887659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6890377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6893068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6896248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6898983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6901697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6904601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6907424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6910164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6912857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6915531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6918273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%]
2024-08-07T18:08:29.6921025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6923733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6926421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6929148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6931867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6934560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6937245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6940059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6942852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6945536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6948260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6950939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA
or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6953644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6956337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6959023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6961707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6964401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6967111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6969796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6972472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6975259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6978064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6980732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6983435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support 
SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6986131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0006s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6988811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6991496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6994221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.6997510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7000198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7002883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7005576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7008257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7011128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7013916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7016592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7019312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7022050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7024718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7027394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7030104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7032776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7035424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7038111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7040883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7043656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7046425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7049218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not 
support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7051934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0011s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7054622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7057328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7060074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7062795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7065508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7068200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7070870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7073568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0006s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7076259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7078948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7081739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7084538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7087224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7089942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7092639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7095801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7098559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7101293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7103983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7106679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7109366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7112061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7114774Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7117638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7120527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7123204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7125910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7128636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7131333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7134061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7136804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7139473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7142151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7144825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7147520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7150220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7153155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7155941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7158611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7161332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7164035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7166714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7169402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7172137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7174798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7177472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 1%] 2024-08-07T18:08:29.7180198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7182893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7185570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7188320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7191125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7193823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7197065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7199787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7202464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7205148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7207850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7210504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 1%] 2024-08-07T18:08:29.7213185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7216093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7218816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7221480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7224292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7227103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7229792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7232500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7235397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7238112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7240801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7243475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7246322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7249112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7251823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7254537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7257230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7260016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7262864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7265559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7268256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7270960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7273656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7276328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7279007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7281712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7284413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7287108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7289800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7292491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0005s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7295745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7298707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7301529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7304257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7307124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7309862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7312531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7315241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0012s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7317922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7320651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7323373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7326066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7328768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7331507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7334270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7336953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7339623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7342316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7344981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7347674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7350355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7353018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7355713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7358409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7361097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7363741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7366491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7369274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7371956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7374681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7377375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7380067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7382758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7385502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7388183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7390886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7393611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7396776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7399461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7402387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7405188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7407845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 
SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7410528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7413241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7415913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7418631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7421350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7424035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7426719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7429409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7432063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7434765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7437551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7440322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 
SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7443014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7445717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7448437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7451091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7453801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7456515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7459195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7461869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7464557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7467231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7469891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7472651Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7475435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7478124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7480802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7483462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7486128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7488821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7491535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7494216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7497363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7500042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7502732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7505394Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7508191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7511023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7513705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7516379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7519091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7521814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7524498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7527180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7529876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7532555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7535207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7537858Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7540530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7543293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7546068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7548763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7551411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7554100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7556765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7559424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7562113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7564809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7567495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7570177Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7572859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7575581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7578369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7581144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7583845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7586532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7589223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7591888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7594588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7597747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7600420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7603089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7605777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7608469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7611143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7613933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7616729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7619443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7622175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7624854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7627533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7630237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7632930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7635601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7638269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7641014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7643710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7646386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7649172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7651966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7654635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7657320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7660044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7662717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7665387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
1%] 2024-08-07T18:08:29.7668140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7670789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7673486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7676172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7678852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7681535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7684303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7687097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7689756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7692435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7695570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7698310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7701029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7703726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7706401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7709086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7711775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7714461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7717169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7720061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7722872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7725547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7728235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7730916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
1%] 2024-08-07T18:08:29.7733570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7736250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7738936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7741631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7744298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7746987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7749683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7752360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7755090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7757854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7760542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7763213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7765890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7768590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7771300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7773976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7776651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7779338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7782045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7784726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7787400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7790138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7792902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7796023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7798720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7801395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7804072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7806717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7809398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7812052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7814752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7817452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7820144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7822833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7825654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7828479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7831154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7833878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7836593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7839279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7841968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7844664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7847383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7850077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7852762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7855434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7858093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7860847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
1%] 2024-08-07T18:08:29.7863596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7866289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7869007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7871693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7874338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7877019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7879724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7882398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7885084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7887800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7890496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7893162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7896395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7899238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7901943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7904650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7907346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7910022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7912724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7915463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7918137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7920887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7923592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7926240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
1%] 2024-08-07T18:08:29.7928893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7931648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7934442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7937117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7939808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7942507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7945195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7947887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7950559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7953233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7955918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7958616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7961272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7963941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7966725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7969504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7972175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7974853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7977557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7980235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7982910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7985586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7988278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7990933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.7993561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7996686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.7999385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8002242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8005027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8007700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8010386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8013044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8015745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8018462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8021209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8023896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 
2024-08-07T18:08:29.8026559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8029230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8031950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8034680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8037442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8040226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8042953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8045629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8048318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8051000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8053711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8056403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%]
2024-08-07T18:08:29.8059063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8061734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8064429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8067117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8069798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8072551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8075344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8078018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8080676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8083363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8086042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8088750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%]
2024-08-07T18:08:29.8091423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8094089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8097235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8099924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8102605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8105278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8108119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8110915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8113566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8116267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8118990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 1%] 2024-08-07T18:08:29.8121654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8124317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8127012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8129688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8132350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8135030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8137709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8140387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8143220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8146012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8148661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8151355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8154039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8156729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8159428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8162153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8164856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8167545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8170233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8172959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8175656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8178407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8181150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8183840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8186496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8189176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8191834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8194517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8197663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8200344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8203023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8205696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8208373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8211025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8213821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8216648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8219362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8222050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8224735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8227417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8230100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8232757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8235442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8238131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8240829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8243531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8246191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8248925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8251743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8254396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8257069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8259772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8262462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8265102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8267777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8270473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8273135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8275818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8278496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8281172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8283896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8286631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8289326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8292013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8294703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8297841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8300495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8303234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8305919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8308572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8311233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8313925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8316582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8319390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8322181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8324857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8327518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8330180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8332833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8335504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8338162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8340829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8343513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8346200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8348881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8351546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8354320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8357100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8359779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8362450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8365140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8367810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8370530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8373199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8375875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8378551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8381209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8383849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8386521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8390059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8392860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8395950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8398679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8401351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8404009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8406663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8409363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8412085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8414771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8417479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8420214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8422923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8425766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8428585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8431286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8434006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8435446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8436873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8438312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8439751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8441208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%]
2024-08-07T18:08:29.8442631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8444084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8445533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8446973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8448480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8450030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8451473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8452914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8454349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8455804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8457244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8458667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%]
2024-08-07T18:08:29.8460143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8461588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8463058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8464500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8465951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8467478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8469038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8470489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8471944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8473384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8474835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8476254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%]
2024-08-07T18:08:29.8477697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8479141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8480604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8482045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8483477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8484926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8486445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8487973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8489400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8490880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8492331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8493769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%]
2024-08-07T18:08:29.8495594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8497085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8498534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8499978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8501425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8502868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8504321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8505863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8507420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8508850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8510304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8511739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%]
2024-08-07T18:08:29.8513183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8514611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8516074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8517504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8518983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8520437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8521906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8523319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8524814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8526373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8527819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8529270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8530724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8532187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8533636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8535086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8536527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8537983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8539435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8540910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8542346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8543875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8545391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8546809Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8548251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8549696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8551177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8552608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8554067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8555500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8556962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8558393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8559856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8561321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8562867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 
2024-08-07T18:08:29.8564376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8565816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8567265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8568725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8570173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8571633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8573105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8574547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8575992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8577430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8578896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8580331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 2%] 2024-08-07T18:08:29.8581881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8583401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8584856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8586294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8587730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8589169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8590603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8592069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8593493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8594934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8596793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8598272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8599682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8601299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8602868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8604327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8605755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8607214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8608647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8610097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8611573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8613002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8614445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8615881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA 
or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8617323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8618777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8620307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8621855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8623290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8624734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8626194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8627625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8629063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8630498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8631964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8633423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8634850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8636310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8637756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8639299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8640855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8642326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8643814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8645292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8646720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8648173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8649614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8651069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8652503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8653939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8655408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8656847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8658368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8659886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8661338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8662799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8664246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8665671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8667128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8668573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8670000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8671437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8672893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8674336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8675755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8677297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8678819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8680280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8681723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8683182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8684607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8686057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8687483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8688916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8690342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8691817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8693242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8694678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8696756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8698331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8699766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8701190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8702671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8704111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8705548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8706980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8708434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8709870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8711313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8712798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8714252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8715777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8717285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8718768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8720206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8721662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8723095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8724549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8725985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8727433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8728844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8730282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8731722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8733187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8734673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8736195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8737627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8739067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8740514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8741932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8743403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8744855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8746297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8747723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8749180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8750630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8752087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8753607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8755141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8756572Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8758007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8759436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8760862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8762324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8763759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8765200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8766627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8768086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8769514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8770953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8772477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%] 2024-08-07T18:08:29.8774023Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 2%]
[... roughly 270 further parametrizations of TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads elided, all logged at 2024-08-07T18:08:29: with batch_size_1 and seq_len_q_1024 fixed, the sweep covers seq_len_k in {2048, 256}, head_dim in {8, 16, 21, 32, 64, 96, 128, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0_0, 0_22, 0_48}, float16/bfloat16, and scale0/scale_l1; every case was SKIPPED in 0.0002s-0.0003s with the same reason, "(Does not support SDPA or pre-SM80 hardware)", as the progress counter advanced from [ 2%] to [ 3%] ...]
2024-08-07T18:08:29.9140247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9141684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9143092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9144550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9145975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9147421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9148836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9150262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9151755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9153284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9154708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9156142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9157573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 
2024-08-07T18:08:29.9159018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9160424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9161844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9163296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9164753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9166190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9167617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9169061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9170565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9172101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9173514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9174974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 
2024-08-07T18:08:29.9176403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9177824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9179227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9180661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9182092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9183498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9184952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9186377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9187815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9189304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9190827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9192245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 
2024-08-07T18:08:29.9193693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9195514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9196975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9198392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9199843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9201303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9202714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9204153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9205604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9207103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9208633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9210175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 
2024-08-07T18:08:29.9211596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9213021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9214441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9215890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9217303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9218766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9220194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9221625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9223049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9224462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9225894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9227398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9228928Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9230339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9231768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9233203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9234671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9236082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9237521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9238960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9240405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9241821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9243269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9244703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9246259Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9247768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9249178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9250616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9252041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9253477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9254912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9256352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9257786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9259210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9260626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9262078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9263504Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9265021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9266545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9267969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9269418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9270839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9272289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9273713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9275190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9276612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9278044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9279461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9280909Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9282312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9283817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9285350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9286779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9288198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9289613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9291060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9292481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9293908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9295747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9297220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9298653Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9300090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9301496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9302937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9304482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9306040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9307451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9308881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9310339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9311754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9313197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9314626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9316084Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9317488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9318966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9320388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9321827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9323329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9324846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9326278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9327703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9329137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9330545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9331986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9333417Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9334835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9336257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9337707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9339125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9340545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9342029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9343552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9344970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9346415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9347835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9349241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9350678Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9352086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9353504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9354916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9356357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9357769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9359193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9360678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9362771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9364166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9365610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9367049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9368474Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9369903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9371325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9372763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9374192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9375637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9377066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9378511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9380022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9381557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9382974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9384414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9385863Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9387287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9388699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9390130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9391570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9392978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9394412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9396251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9397730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9399256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9400797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9402218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%] 2024-08-07T18:08:29.9403672Z 
2024-08-07T18:08:29.9403672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9405091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9406542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9407967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9409400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9410841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9412257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9413704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9415142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9416604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9418094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9419663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9421085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9422509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9423925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9425365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9426815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9428244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9429666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9431083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9432529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9433947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9435377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9436893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9438425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9439823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9441260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9442689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9444127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9445528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9446985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9448409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9519449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9521390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9522866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9524297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9526040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9527680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9529135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9530624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9532051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9533484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9534895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9536339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9537794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9539214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9540626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9542082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9543510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9545014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9546520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9548465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9550034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9551551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9553051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9554687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9556195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9557965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9559512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9561233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9567700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9569194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9570785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9572319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9573737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9575148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9576577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9577993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9579409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9580817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9582228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9583643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9585059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9586464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9587856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9589263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9590897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9592416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9593834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9595825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9597287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9598693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9600097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9601506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9602921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9604323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9605736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9607185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9608610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9610178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9611771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9613181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9614625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9616084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9617508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9618924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9620367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9621762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9623178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9624611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9626040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9627460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9628949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9630471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9631888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9633311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9634747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9636172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9637582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9638996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9640425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9641845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9643258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9644686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9646103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9647587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9649102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9650498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9651923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9653344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9654799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9656198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9657623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9659049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9660488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9661894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9663334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9664771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 3%]
2024-08-07T18:08:29.9666290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9667800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9669207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9670630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9672049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9673459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9674880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9676318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9677795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9679205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9680617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9682054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9683468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9684989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9686494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9687914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9689339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9690745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9692166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9693585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9695544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9696997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9698425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9699843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9701290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9702688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9704262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9705805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9707214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9708625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9710024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9711457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9712864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9714276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9715754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9717179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9718592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9720011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9721406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9722832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9724328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9725841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9727261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9728682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9730129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9731528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9732960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9734448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9735911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9737320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9738752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9740158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9741585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9743063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9744552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9745994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9747409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9748829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9750229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9751665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9753090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9754507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9755932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9757448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9758886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9760315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9761810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9763352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9764786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9766222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9767673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9769103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9770559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9771990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9773431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9774856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9776307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9777710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9779138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9780635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9782171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9783579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9785025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9786463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9787891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9789322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9790749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9792197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 
2024-08-07T18:08:29.9793637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9795488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9796957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9798402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9799970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9801534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9802957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9804412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9805882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9807323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9808755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9810188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9811633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9813045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9814483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9815973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9817434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9818928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9820451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9821877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9823325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9824747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9826208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9827631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9829064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9830490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9831906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9833360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9834792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9836241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9837733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9839272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9840686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9842122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9843548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9844985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 
2024-08-07T18:08:29.9846428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9847853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9849259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9850676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9852123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9853537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9854964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9856475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9858004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9859410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9860841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9862274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 
2024-08-07T18:08:29.9863716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9865134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9866589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9868065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9869513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9870930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9872361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9873807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9875327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9876866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9878289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9879734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9881164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9882588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9884012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9885483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9886910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9888339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9889755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9891204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9892631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9894117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9896078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9897538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9898994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9900416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9901851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9903287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9904755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9906197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9907634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9909066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9910530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9911945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9913539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9915136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9916594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9918025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9919441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9920883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9922317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9923754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9925166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9926622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9928049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9929485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9930901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9932402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9933917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9935324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9936782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9938202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9939644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9941061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9942505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9943926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9945369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9946797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9948239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9949650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 
2024-08-07T18:08:29.9951176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9952671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9954095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9955518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9956951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9958373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9959826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9961271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9962695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9964116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9965539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9967000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 
2024-08-07T18:08:29.9968426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9969936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9971438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9972889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9974326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9975784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9977211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9978645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9980107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9981528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9982969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9984393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:29.9985930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9987339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9988838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9990349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9991793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9993206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9994650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9996500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9997961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:29.9999405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0000820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0002253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%]
2024-08-07T18:08:30.0003684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0005122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0006559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0008126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0009663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0011095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0012514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0013958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0015422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0016883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0018305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0019721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 
2024-08-07T18:08:30.0021156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0022566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0023997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0025412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0026969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0028458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0029884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0031307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0032756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0034155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0035586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0037030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 
2024-08-07T18:08:30.0038471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0039875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0041288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0042743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0044172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0045676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0047198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0048644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0050076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0051517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0052933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 2024-08-07T18:08:30.0054366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 4%] 
[~240 further test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads parametrizations SKIPPED [0.0002s-0.0010s] (Does not support SDPA or pre-SM80 hardware), covering batch_size_1, seq_len_q_1024, seq_len_k_587 (head_dim 8/64/72/96) and seq_len_k_64 (head_dim 16/21/128/160/192/203/256), is_causal False/True, dropout_p 0.0/0.22/0.48, bfloat16/float16, scale0/scale_l1; progress 4% -> 5%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0406417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0408017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0409438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0410884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0412304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0413734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0415155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0416640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0418089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0419499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0420939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0422375Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0423803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0425294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0426823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0428274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0429709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0431136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0432577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0434008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0435429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0436859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0438295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0439735Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0441147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0442571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0444067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0445607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0447013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0448469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0449897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0451341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0452742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0454174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0455604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0457031Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0458491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0459916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0461362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0462865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0464377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0465793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0467294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0468770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0470205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0471617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0473053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0474477Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0475874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0477298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0478747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0480182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0481665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0483178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0484595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0486040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0487455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0488903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0490316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0491762Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0493175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0494606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0496448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0497909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0499349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0500884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0502475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0503903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0505336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0506760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0508192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0509618Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0511038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0512450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0513886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0515342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0516764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0518187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0519696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0521221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0522635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0525348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0528017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0530705Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0533398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0536039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0538947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0541651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0544312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0548480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0551169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0553966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0556720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0559369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0562016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0565887Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0568612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0571275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0573939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0576705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0579405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0582084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0586079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0588789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0591441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0594194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0597517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0600231Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0602912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0606916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0609645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0612322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0615026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0617738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0620415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0623111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0626677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0629384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0632188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0635002Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0637650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0640303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0643171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0646041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0648694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0651351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0654035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0656789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0659463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0662107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0664758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0667552Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0670316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0673000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0675684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0678387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0681041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0683738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0686420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0689121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0691832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0694512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0697653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0700328Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0703111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0705897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0708559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0711293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0713979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0716661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0719345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0722047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0724719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0727378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0730079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0732755Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0735427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0738197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0740966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0743650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0746326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0749002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0751671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0754362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0757055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0759718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0762383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0765064Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0767802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0770454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0773214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0775994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0778654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0781317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0783975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0786663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0789336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0792026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0794684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0797811Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0800462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0803187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0805855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0808659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0811445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0814098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0816797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0819516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0822187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0824846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0827514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0830188Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0832815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0835463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0838138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0840817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0843553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0846324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0848957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0851626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0854291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0856932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0859609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0862308Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0864988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0867625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0870308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0873041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0875723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0878495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0881261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0883929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0886606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0889284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0891969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0894645Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0897787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0900457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0903119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0905806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0908477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0911127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0913939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0916809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0919489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0922141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0924816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0927536Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0930217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0932845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0935561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0938255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0940939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0943614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0946298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0949068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0951824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0954541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0957286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0959973Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0962646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0965293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0967966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0970659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0973368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0976024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0978687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0981386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0984133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0986877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0989540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0992234Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0994874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.0997967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1000654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1003334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1006011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1008680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1011346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1014027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1016728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1019375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1022187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1025001Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1027647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1030312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1032962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1035636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1038280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1040930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1043556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1046213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1048886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1051524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1054204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1056999Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1059788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1062445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1065127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1067910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1070592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1073256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1075935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1078596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1081291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1083956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1086634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1089317Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1092056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1094768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1097884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1100574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1103268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1105925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1108611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1111279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1113935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1116646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1119310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1122000Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1124662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1127416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1130203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1132884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1135579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1138250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1140919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1143620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1146271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1148937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1151575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1154235Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1156894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1159542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1162276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1165026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1167671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1170331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1172990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1175653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1178300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1180943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1183609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1186273Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1188955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1191611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1194250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1197539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1200330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1202988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1205664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1208349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1210997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1213647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1216406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1219088Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1221735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1224396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1227054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1229702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1232434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1235180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1237831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1240497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1243154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1245786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1248436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1251136Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1253828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1256476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1259142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1261810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1264466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1267202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1269954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1272654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1275310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1277967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1280631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1283287Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1285945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1288590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1291261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1293908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1297026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1299720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1302511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1305359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1308010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1310671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1313314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1316036Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1318690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1321343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1324011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1326676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1329340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1332026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1334679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1337353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1340098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1342834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1345463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1348144Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1350791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1353420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1356071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1358764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1361392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1364021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1366705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1369436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1372081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1374797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1377529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1380191Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1382900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1385596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 5%] 2024-08-07T18:08:30.1388260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1390937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1393597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1396741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1399413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1402097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1404782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1407432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1410207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1412982Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1415666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1418313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1420988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1423673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1426328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1428958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1431611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1434268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1436920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1439587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1442269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1445035Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1447846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1450522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1453237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1455958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1458645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1461303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1463995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1466735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1469421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1472109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1474803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 
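Note: each entry above is one cell of a Cartesian parameter grid; the test ID encodes the sampled values (batch_size, seq_len_q, seq_len_k, head_dim, is_causal, dropout_p, dtype, scale). A minimal sketch of how such IDs arise, assuming plain pytest parametrization rather than PyTorch's internal test harness; the grid values below are read off the IDs in this log and are illustrative, not the suite's actual source of truth:

import itertools
import pytest

# Illustrative grid reconstructed from the test IDs above (assumption: not
# exhaustive and not the suite's real definition).
GRID = list(itertools.product(
    [1],                                                    # batch_size
    [143, 1024],                                            # seq_len_q
    [8, 1024],                                              # seq_len_k
    [8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256],  # head_dim
    [False, True],                                          # is_causal
    [0.0, 0.22, 0.48],                                      # dropout_p
))

@pytest.mark.parametrize(
    "batch_size,seq_len_q,seq_len_k,head_dim,is_causal,dropout_p", GRID
)
def test_flash_attention_vs_math_ref_grads(
    batch_size, seq_len_q, seq_len_k, head_dim, is_causal, dropout_p
):
    ...  # compare flash-attention forward/backward against the math reference

The suite's ID formatting evidently sanitizes values into identifier-safe names (0.22 becomes dropout_p_0_22); plain pytest would instead render a bracketed ID like [1-143-8-72-True-0.22].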
2024-08-07T18:08:30.1477496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1480229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1482975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1485661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1488353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1491038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1493720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1496881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1499590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1502281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1504947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1507691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 
2024-08-07T18:08:30.1510399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1513106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1515971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1518781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1521479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1524168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1526853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1529521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1532216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1534931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1537620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1540325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1543016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1545699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1548343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1551100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1553878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1556569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1559248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1561947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1564608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1567298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1569966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1572648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1575338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1578017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1580672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1583347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1586156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1588922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1591594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1594298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1597462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1600126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1602788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1605459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 
2024-08-07T18:08:30.1608146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1610796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1613509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1616220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1618901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1621750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1624525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1627187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1629870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1632524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1635197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1637870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 
2024-08-07T18:08:30.1640591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1643288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1645957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1648634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1651348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1654052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1656810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1659595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1662296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1664978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1667716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1670395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1673097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1675770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1678444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1681116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1683795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1686505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1689172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1691948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1694725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1697838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1700523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1703221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1705916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1708597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1711295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1714018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1716742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1719432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1722125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1724813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1727652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1730462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1733153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1735814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1738509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1741204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1743860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1746546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1749250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1751908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1754581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1757270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1759983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1762740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1765508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1768162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1770849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1773527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1776187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1778856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1781554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1784262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1786938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1789621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1792322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1795299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1798211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1800996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 
2024-08-07T18:08:30.1803672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1806342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1809002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1811672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1814375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1817089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1819742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1822434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1825118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1827798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1830448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1833210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 
2024-08-07T18:08:30.1835992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1838673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1841408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1844104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1846805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1849498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1852175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1854856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1857552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1860290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1862974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1865648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1868411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1871148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1873812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1876491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1879206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1881878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1884540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1885967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1887421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1888831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1890253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.1891721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1893147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1894660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1896565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1898036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1899475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1900941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1902360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1903798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1905229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1906674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1908087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1909526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1910964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1912369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1913921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1915472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1916958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1918380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1919823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1921264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1922704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1924120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1925559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1926981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1928428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1929848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1931287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1932808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1934341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1935775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1937200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1938651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1940081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1941553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1942978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1944427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1945842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1947268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1948687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1950123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1951651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1953148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1954581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1955998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1957446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1958863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1960292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1961766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1963222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1964632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1966070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1967552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1969011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1970507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1972046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1973470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1974899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1976335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1977765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1979198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1980622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1982076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1983479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1984921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1986356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1987780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1989265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1991572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1993012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1994420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1996277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1997743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.1999182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2000587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2002026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2003444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2004886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2006306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2007734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2009296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2010859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2012272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2013710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2015131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2016599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2018016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2019424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2020865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2022295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2023721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2025134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2026563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2028061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2029569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2030976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2032432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2033868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2035303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2036716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2038148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2039605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2041013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2042476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2043911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2045359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2046768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2048324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2049875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2051313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2052744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2054180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2055605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2057036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2058470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2059883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2061325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2062788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2064212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2065619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2067133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2068647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2070069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2071479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2072954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2074383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2075823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2077250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2078676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2080131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2081555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2083011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2084430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2085961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2087477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2088905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2090327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2091772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2093191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2094619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2096469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2097930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2099334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2100751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2102191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2103626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2105185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2106770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2108215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2109651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2111088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2112504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2113951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2115386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2116858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2118287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2119706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2121146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2122554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2124065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2125572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2127016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2128421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2129856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2131272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2132731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2134148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2135575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2136990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2138434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2139835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2141235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2142770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2144298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2145716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2147128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2148572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2149992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2151418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2152856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2154281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2155693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2157115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2158524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2159961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2161450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2162955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2164380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2165793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2167233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2168627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2170055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2171489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2172956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2174359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2175795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2177227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2178673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2180166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2181673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2183137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2184572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2186013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2187435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
2024-08-07T18:08:30.2188870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2191733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2193159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2194603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2196433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2197878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2199413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2200960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2202379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2203809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2205247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2206670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2208121Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2209539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2210967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2212389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2213859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2215282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2216761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2226570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2228288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2229732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2231156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2232599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2234021Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2235441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2236857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2238293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2239711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2241141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2242561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2243995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2245492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2247001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2248407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2249841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2251280Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2252682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2254110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2255532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2256984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2258387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2259817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2261264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2262700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2264178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2265686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2267094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2268601Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2270001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2271421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2272850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2274265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2275693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2277097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2278532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2279955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2281394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2282879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2284421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2285842Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 6%] 2024-08-07T18:08:30.2287267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2288683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2290123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2291576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2292991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2294430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2296496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2297989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2299412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2300847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2302441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2303996Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2305389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2306813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2308234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2309668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2311061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2312508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2313929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2315344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2316813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2318237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2319666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2321161Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2322683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2324081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2325519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2326944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2328363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2329768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2331200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2332641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2334041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2335467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2336879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2338304Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2339815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2341322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2342739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2344164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2345564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2346987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2348396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2349825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2351227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2352648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2354064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2355487Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2356902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2358394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2359914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2361354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2362775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2364184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2365625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2367042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2368469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2369885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2371313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2372731Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2374129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2375543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2377031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2378539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2379937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2381369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2382787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2384225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2385615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2387034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2388454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2389882Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2391302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2392736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2394147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2396030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2397625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2399147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2400581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2402033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2403468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2404875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2406304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2407724Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2409137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2410535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2411995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2413408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2414827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2416353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2417855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2419284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2420686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2422140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2423551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2424982Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2426388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2427802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2429215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2430657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2432079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2433497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2435000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2436507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2437923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2439329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2440758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2442181Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2443595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2444995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2446420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2447828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2449248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2450650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2452094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2453587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2455079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2456480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2457895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2459340Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2460743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2462192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2463612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2465048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2466449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2467876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2469297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2470734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2472226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2473738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2475144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2476563Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2477984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2479384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2480815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2482258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2483677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2485078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2486520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2487943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2489360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2490840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2492405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2493841Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2495710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2497175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2498610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2500061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2501503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2502955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2504386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2505841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2507270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2508711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0004s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2510256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2511834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2513247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2514684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2516165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2517625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2519032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2520454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2521928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2523359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2524796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2526217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2527669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2529183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2530706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2532208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2533666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2535106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2536540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2537968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2539406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2540858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2542298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2543743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2545169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
7%] 2024-08-07T18:08:30.2546613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2548098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2549612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2551035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2552509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2553930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2555363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2556788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2558243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2559656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2561067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2562530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2563959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2565383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2566873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2568471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2569908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2571340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2572779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2574221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2575649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2577082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2578505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2579941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2581369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2582795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2584220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2585711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2587233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2588642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2590074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2591517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2592961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2594358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2596758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2598233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2599682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2601101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2602559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2603994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2605565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2607121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2608543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2609998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2611439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2612900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2614324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2615769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
7%] 2024-08-07T18:08:30.2617258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2618687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2620106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2621558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2623008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2624513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2626086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2627510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2628967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2630386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2631816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2633266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2634724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2636151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2637591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2639024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2640485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2641902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2643488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2645015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2646450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2647891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2649326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2650763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
7%] 2024-08-07T18:08:30.2652187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2653638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2655057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2656502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2657935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2659375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2660787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2662306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2663831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2665263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2666682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2668111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2669553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2670957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2672391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2673828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2675275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2676690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2678130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2679547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2681068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2682568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2684003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2685419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2686869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2688271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2689678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2691115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2692563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2693986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2695842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2697325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2698742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2700301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2701822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2703295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2704730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2706174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2707592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2709020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2710472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2711902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2713365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2714800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2716305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2717733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2719266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2720778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
7%] 2024-08-07T18:08:30.2722220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2723659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2725093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2726516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2727964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2729381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2730810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2732262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2733722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2735163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2736578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2738094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2739607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2741035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2742453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2743927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2745353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2746782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2748200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2749651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2751085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2752501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2753966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 2024-08-07T18:08:30.2755384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 7%] 
2024-08-07T18:08:30.2756903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED (Does not support SDPA or pre-SM80 hardware) [ 7%]
2024-08-07T18:08:30.3123278Z [placeholder: ~250 parametrized skips, one per combination, each SKIPPED in 0.0002s–0.0003s with the same reason "(Does not support SDPA or pre-SM80 hardware)"; parameters swept: batch_size 1, seq_len_q 143, seq_len_k ∈ {256, 2048}, head_dim ∈ {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal ∈ {False, True}, dropout_p ∈ {0.0, 0.22, 0.48}, dtype ∈ {float16, bfloat16}, scale ∈ {scale0, scale_l1}; progress advances from [ 7%] to [ 8%]]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3124715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3126132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3127564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3129019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3130434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3131871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3133300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3134767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3136261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3137794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3139209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3140647Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3142055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3143476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3144913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3146336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3147766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3149173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3150609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3152040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3153462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3154967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3156486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3157900Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3159320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3160732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3162163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3163575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3165019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3166438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3167902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3169343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3170750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3172170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3173651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3175191Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3176583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3177991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3179403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3180851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3182242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3183664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3185101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3186529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3187929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3189341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3190769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3192266Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3193767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3195576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3197048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3198474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3199903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3201307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3202743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3204170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3205609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3207020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3208437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3209865Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3211395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3212923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3214335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3215779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3217223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3218658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3220065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3221500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3222907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3224326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3225739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3227177Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3228574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3230098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3231629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3233042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3234488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3235905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3237360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3238809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3240235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3241649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3243068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3244490Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3245914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3247312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3248721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3250226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3251711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3253127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3254609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3256051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3257444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3258866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3260285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3261713Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3263110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3264551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3265968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3267396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3268871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3270359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3271786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3273202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3274647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3276054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3277478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3278892Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3280296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3281683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3283104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3284544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3285952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3287435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3288925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3290354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3291745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3293161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3294598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3296452Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3297891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3299314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3300728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3302168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3303576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3305023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3306567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3308119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3309516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3310930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3312358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3313766Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3315197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3316645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3318075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3319486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3320911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3322313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3323736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3325247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3326750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3328149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3329584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3331006Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3332399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3333819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3335261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3336725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3338120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3339544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3340966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3342396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3343874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3345423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3346824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3348257Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3349660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3351058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3352485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3353902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3355342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3356740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3358171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3359593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3361006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3362483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3364002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3365499Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3366917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3368324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3369758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3371172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3372577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3374007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3375440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3376878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3378294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3379717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3381201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3382718Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3384106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3385537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3386997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3388429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3389822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3391229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3392664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3394077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3395768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3397192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3398618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3400152Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3401678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3403073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3404494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3405909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3407350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3408745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3410151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3411584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3412974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3414399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3415807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3417273Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3418658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3420143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3421643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3423065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3424463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3425887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3427282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3428709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3430105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3431498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3432929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3434364Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3435779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3437177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3438692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3440202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3441627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3443033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3444488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3445909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3447332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3448749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3450153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3451579Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3452973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3454406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3455815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3457320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3458801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3460216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3461626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3463061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3464473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3465891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3467352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3468792Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3470184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3471587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3473030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3474469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3475965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3477457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3478889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3480310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3481735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3483135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3484584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3486022Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3487442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3488841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3490255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3491693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3493086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3494613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3496362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3497808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3499219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3500643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3502044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3503467Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3504886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3506313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3507719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3509154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3510555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3511954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3513506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3515057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3516521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3517933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3519355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3520763Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3522160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3523549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3525005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3526414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3527824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3529226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3530631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3532181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3533659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3535096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3536507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3537952Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3539355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3540777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3542194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3543635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3545056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3546486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3547908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3549342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3550739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3552225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3553734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3555166Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3556582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3557989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3559406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3560816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3562230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3563622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3565066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3566482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3567896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3569295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3570788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3572301Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3573692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3575132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3576547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3577973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3579370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3580801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3582205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3583630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3585053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3586480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3587881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3589364Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3590850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3592239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3593666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3595349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3596793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3598195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 8%] 2024-08-07T18:08:30.3599622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3601030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3602434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3603831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3605283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3606697Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3608234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3609755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3611166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3612599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3614007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3615448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3616893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3618337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3619746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3621161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3622561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3623991Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3625392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3626882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3628382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3629792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3631208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3632610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3634035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3635457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3636873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3638279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3639706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.3641118Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3642537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3643928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3645444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3646950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3648372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3649773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3651189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3652629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3654029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3655473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3656875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3658301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3659689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3661105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3662506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3664011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3665499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3666914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3668308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3669720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3671132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3672521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3673946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3675378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3676783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3678180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3679617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3681029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3682441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3683917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3685441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3686850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3688251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3689680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3691075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3692499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3693907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3695570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3696996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3698421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3699819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3701231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3702771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3704376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3705771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3707196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3708610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3710021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3711434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3712835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3714262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3715672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3717131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3718538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3719960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3721449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3722954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3724366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3725786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3727190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3728576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3729977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3731389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3732819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3734231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3735648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3737070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3738493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3739966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3741461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3742881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3744330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3745744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3747180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3748604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3750037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3751512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3752931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3754391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3755825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3757250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3758739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3760258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3761672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3763090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3764522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3765964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3767433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3768848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3770288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3771710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3773152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3774585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3776014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3777529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3779056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3780455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3781885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3783304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3784783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3786197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3787635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3789063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3790488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3791918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3793340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3794791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3796664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3798228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3799633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3801069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3802497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3803924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3805350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3806786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3808212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3809631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3811042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3812465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3813898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3815397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3816990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3818416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3819874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3821288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3822721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3824135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3825598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3827006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3828425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3829834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3831254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3832661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3834142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3835669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3837089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3838509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3839917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3841354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3842766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3844180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3845590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3847034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3848459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3849890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3851303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3852726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3854310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3855830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3857263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3858694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3860146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3861562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3863000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3864435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3865874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3867283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3868716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3870136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3871575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3873059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3874579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3876019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3877446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3878881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3880288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3881736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3883168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3884618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3886043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3887483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3888910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3890332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3891825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3893333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3894802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3896487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3897944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3899354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3900794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3902204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3903623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3905067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3906512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3907914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3909336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3910885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3913114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3914522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3915962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3917452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3918877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3920305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3921716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3923159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3924582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3926033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3927438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3928876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3930376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3931890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3933300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3934716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3936181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3937576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3939004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3940424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3941859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3943250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3944675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3946111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3947544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3949018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3950529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3951952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3953405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3954820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3956244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3957728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3959164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3960594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3962011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3963461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3964908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3966340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3967837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3969356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3970774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3972205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3973623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3975071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3976507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3977924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3979348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3980763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3982210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3983622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3985072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3986564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3988111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3989519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3990953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3992379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3993820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3995525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3996973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3998406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.3999824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.4001245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.4002665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.4004093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%]
2024-08-07T18:08:30.4005662Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4007187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4008585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4010016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4011438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4012853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4014251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4015704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4017169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4018572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4019998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4021421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4022855Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4024331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4025872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4027294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4028735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4030142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4031567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4032972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4034413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4035846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4037274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4038683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4040097Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4041502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4042975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4044482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4045913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4047328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4048733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4050166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4051580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4053034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4054441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4055888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4057304Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4058711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4060125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4061632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4063153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4064553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4066003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4067472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4068916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4070319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4071747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4073159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4074585Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4075996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4077419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4078823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4080309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4081812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4083215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4084642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4086059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4087489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4088891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4090326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4091748Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4093157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4094575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4096272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4097700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4099235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4100769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4102180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4103620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4105026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4106455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4107854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4109281Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4110674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4112082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4113492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4114942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4116379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4117804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4119283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4120829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4122232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4123632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4125085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4126501Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4127923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4129338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4130776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4132200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4133638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4135067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4136502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4138013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4139501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4140927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4142337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4143845Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4145266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4146695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4148113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4149542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4150938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4152371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4153777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4155231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4156705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4158208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4159618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4161040Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4162470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4163876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4165348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4166775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4168200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4169615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4171064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4172482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4173903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4175409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4176920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4178326Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4179723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4181145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4182553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4183986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4185418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4186839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4188247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4189684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4191093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4192516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4194002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4195821Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4197240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4198672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4200104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4201523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4202948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4204365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4205818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4207233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4208662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4210080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4211501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4213046Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4214569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4215986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4217470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4218899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4220321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4221724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4223141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4224575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4225981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4227404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4228819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4230246Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4231716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4233217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4234630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4236071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4237476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4238910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4240336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4241758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4243183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4244615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4246044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4247460Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4248868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4250338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4251850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4253256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 9%] 2024-08-07T18:08:30.4254676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4256085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4257513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4258909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4260323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4261743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4263144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4264596Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4266007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4267437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4268933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4270476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4271875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4273304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4274748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4276194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4277596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4279026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4280441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4281852Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4283311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4284740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4286170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4287661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4289162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4290568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4291998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4293410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4294856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4296539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4297981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4299408Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4300829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4302238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4303663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4305124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4306654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4308207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4309622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4311061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4312471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4313908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4315335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4316817Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4318224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4319640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4321049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4322477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4323891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4325393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4326925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4328342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4329755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4331161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4332596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4334007Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4335421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4336826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4338254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4339668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4341093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4342501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4343909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4345431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4346913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4348334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4349739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4351166Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4352564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4353977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4355388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4356812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4358204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4359630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4361032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4362470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4363939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4365422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4366858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4368338Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4369777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4371186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4372649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4374076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4375499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4376909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4378348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4379782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4381208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4382719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4384221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4385656Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4387056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4388482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4389895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4391338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4392773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4394196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4395881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4397348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4398757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4400176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4401715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4403281Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4404672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4406080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4407520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4408938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4410356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4411766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4413224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4414638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4416056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4417510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4418933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4420478Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4421975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4423394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4424811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4426250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4427647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4429066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4430473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4431913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4433326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4434742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4436157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4437588Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4439058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4440561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4441972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4443437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4444842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4446248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4447676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4449097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4450525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4451929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4453379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4454788Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4456198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4457667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4459177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4460584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4461989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4463414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4464824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4466253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4467656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4469079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4470485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4471920Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4473344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4474764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4476254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4477784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4479186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4480612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4482030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4483490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4484892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4486321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4487752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4489160Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4490576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4491983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4493421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4494924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4496723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4498118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4499537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4500949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4502367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4503777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4505183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4506676Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4508064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4509482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4510896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4512324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4513717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4515289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4516849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4518279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4519675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4521100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4522515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4523916Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 10%]
2024-08-07T18:08:30.4525324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%]
[... roughly 250 further parameterizations of the same test elided: batch_size 1; seq_len_q 143; seq_len_k 8 or 64; head_dim 8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256 in this excerpt; is_causal False/True; dropout_p 0.0, 0.22, 0.48; float16/bfloat16; scale0/scale_l1 -- every one SKIPPED in ~0.0002s with the identical reason "(Does not support SDPA or pre-SM80 hardware)" at [ 10%] progress ...]
2024-08-07T18:08:30.4884426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4887252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4888652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4890077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4891571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4893084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4894482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4896162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4897597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4899020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4900413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4901832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4903266Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4904644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 10%] 2024-08-07T18:08:30.4906056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4907463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4908890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4910278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4911821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4913377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4914799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4916191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4917651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4919051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4920477Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4921867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4923280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4924705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4926120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4927535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4928922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4930423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4931922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4933351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4934753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4936174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4937568Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4938976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4940371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4941783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4943205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4944594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4946009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4947403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4948892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4950355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4951760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4953192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4954628Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4956022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4957429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4958836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4960274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4961665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4963090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4964524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4965932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4967420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4968953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4970365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4971773Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4973209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4974591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4976004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4977413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4978832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4980222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4981625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4983082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4984471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4985970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4987499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4988960Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4990382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4991826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4993293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4994748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4996445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4997915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.4999353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5000820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5002255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5003700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5005260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 
2024-08-07T18:08:30.5006857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5008295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5009715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5011173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5012626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5014068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5015496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5017003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5018447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5019888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5021322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5022781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5024302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5025815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5027258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5028699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5030164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5031587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5033062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5034506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5035980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5037402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5038855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5040288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5041735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5043255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5044793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5046223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5047664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5049109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5050531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5051974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5053437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5060389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5061990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5063471Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5064934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5066375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5067987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5069585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5071024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5072469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5073902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5075350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5076790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5078237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5079689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 
2024-08-07T18:08:30.5081130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5082567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5083988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5085422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5086938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5088470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5089891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5091334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5092767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5094216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5095910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5097380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
11%] 2024-08-07T18:08:30.5098837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5100307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5101734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5103185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5104622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5106212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5107774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5109224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5110689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5112145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5113588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5115014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 11%]
2024-08-07T18:08:30.5116463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%]
[... ~240 near-identical SKIPPED entries, timestamps 2024-08-07T18:08:30.5117952Z through 2024-08-07T18:08:30.5476538Z, condensed: every parameterization of test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads in this stretch — batch_size 1, seq_len_q 2048, seq_len_k 1024 (head_dim 8, 21, 32, 64, 72, 96, 192, 203, 256) or seq_len_k 128 (head_dim 128, 160), is_causal False/True, dropout_p 0.0/0.22/0.48, dtype float16/bfloat16, scale scale0/scale_l1 — is SKIPPED in 0.0002-0.0004s with the same reason, "(Does not support SDPA or pre-SM80 hardware)", at [ 11%] progress ...]
2024-08-07T18:08:30.5477963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s]
(Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5479408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5480841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5482282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5483699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5485232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5486729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5488141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5489585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5491038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5492470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5493889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5495589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5497040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5498471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5499895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5501344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5502759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5504317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5505876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5507319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5508747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5510168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5511598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5513019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5514465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5515870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5517346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5518781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5520262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5521674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5523194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5524710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5526164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5527586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5529034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5530541Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5531985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5533431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5534849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5536282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5537711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5539149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5540583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5542104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5543630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5545061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5546477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 
2024-08-07T18:08:30.5547933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5549361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5550797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5552241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5553671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5555119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5556541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5558045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5559475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5561037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5562540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5563980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 11%] 2024-08-07T18:08:30.5565412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5566875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5568367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5569819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5571274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 11%] 2024-08-07T18:08:30.5572696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5574124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5575543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5577000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5578434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5579948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5581469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5582908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5584337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5585778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5587195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5588637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5590082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5591514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5592923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5594354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5596106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5597544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5599098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5600652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5602106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5603526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5604971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5606385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5607825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5609255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5611986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5614659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5617390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5620086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5622750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5625552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5628350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5631031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5635191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5637889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5640593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5643277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5645956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5648654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5652007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5655464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5658149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does 
not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5660837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5663657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5666438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5669139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5673051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5675788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5678514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5681228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5683907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5686611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5689309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5693054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5696039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5698751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5701581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5704382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5707068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5710160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5713749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5716413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5719157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5721847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5724538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5727206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5730067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5733189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5735873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5738647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5741411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5744080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5746743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5749441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5752115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5754789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5757449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5760114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5762769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5765437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5768108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5770763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5773541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5776334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5778996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5781639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5784320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5787021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5789698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5792391Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5795366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5798066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5800763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5803442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5806140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5808939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5811715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5814344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5817055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5819780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5822442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.5825117Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 12%]
2024-08-07T18:08:30.5827790Z through 2024-08-07T18:08:30.6468888Z: the same skip repeats for every parameterization in this stretch of the log: batch_size_1, seq_len_q_2048; seq_len_k_128 with head_dim 64 (tail of its sweep), 72, 8, 96; seq_len_k_2048 with head_dim 128, 160, 16, 192, 203, 21, 256 (up to is_causal_True_dropout_p_0_22_bfloat16_scale0); each head_dim crossed with is_causal {False, True}, dropout_p {0_0, 0_22, 0_48}, dtype {bfloat16, float16}, and scale {scale0, scale_l1}.
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6471595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6474286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6476960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6479776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6482542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6485228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6487910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6490624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6493296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6496230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6498948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6501653Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6504354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6507052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6509735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6512417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6515244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6518071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6520748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6523427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6526146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6528810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6531465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 
2024-08-07T18:08:30.6534153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6536855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6539536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6542208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6544903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6547587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6550324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6553177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6555855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6558553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6561244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6563910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 
hardware) [ 12%] 2024-08-07T18:08:30.6566573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6569277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6572020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6574699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6577390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6580116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6582798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6585543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6588309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6591023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6593702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6596635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6599340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6602008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6604692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6607372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6610062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6612753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6615434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6618120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6620920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6623727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6626430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6629131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does 
not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6631823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6634519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6637177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6639856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6642549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6645273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6647978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6650664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6653325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6656082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6658842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6661497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6664170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6666881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6669536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6672224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6674901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6677597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6680274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6682940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6685617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6688304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6691057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6693826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6696772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6699504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6702213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6704866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6707544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6710244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6712934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6715603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6718334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6721014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6723671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6726457Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6729304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6731987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6734657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6737304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6739989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6742671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6745361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6748031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6750702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6753389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6756076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6758762Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6761518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6764305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6766982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6769723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6772423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6775118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 12%] 2024-08-07T18:08:30.6777820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6780498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6783178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6785858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6788510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 
2024-08-07T18:08:30.6791192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6793859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6796982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6799802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6802466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6805129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6807829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6810500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6813161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6815876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6818647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6821312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 13%] 2024-08-07T18:08:30.6823994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6826708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6829409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6832177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6834992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6837678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6840366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6843061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6845764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6848437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6851122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6853813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6856449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6859128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6861827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6864495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6867308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6870115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6872809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6875483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6878157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6880831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6883524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.6886219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 13%]
[condensed log, 2024-08-07T18:08:30.6888911Z – 2024-08-07T18:08:30.7289102Z: test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads enumerates every parametrization with batch_size=1, seq_len_q=2048, seq_len_k=256 across head_dim in {8, 16, 21, 32, 64, 72, 96, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, and scale in {scale0, scale_l1}; each entry is SKIPPED in 0.0002s–0.0003s with the identical reason "(Does not support SDPA or pre-SM80 hardware)" at [ 13%] progress.]
2024-08-07T18:08:30.7290541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
13%] 2024-08-07T18:08:30.7291983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7293382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7294808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7296586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7298169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7299570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7301001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7302425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7303869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7305280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7306719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7308141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 
2024-08-07T18:08:30.7309575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7311013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7312422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7313861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7315365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7316884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7318340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7319804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7321240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7322669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7324087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7325526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 
2024-08-07T18:08:30.7326946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7328355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7329809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7331227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7332665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7334164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7335678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7337093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7338538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7339972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7341402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7342822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 
2024-08-07T18:08:30.7344273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7345672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7347096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7348521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7349983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7351415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7352900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7354472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7355898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7357330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7358751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7360206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 
2024-08-07T18:08:30.7361626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7363049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7364448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7365882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7367304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7368765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7370222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7371717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7373241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7374645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7376080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7377501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 
2024-08-07T18:08:30.7378937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7380359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7381793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7383210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7384646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7386049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7387483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7388900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7390423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7391947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7393353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7394783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 
2024-08-07T18:08:30.7396471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7397892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7399293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7400748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7402173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7403588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7404993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7406432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7407841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7409357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7410908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7412327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7413777Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7415197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7416633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7418100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7419558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7421000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7422435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7423866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7425323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7426734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7428904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7430443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7431878Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7433302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7434734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7436169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7437592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7439020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7440444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7441893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7443325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7444757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7446170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7447745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7449264Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7450675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7452114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7453549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7455005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7456418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7457861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7459298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7460754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7462171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7463614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7465029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7466547Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7468050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7469476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7470905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7472335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7473765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7475182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7476621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7478047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7479475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7480913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7482361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7483777Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7485270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7486762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7488198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7489616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7491062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7492484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7493900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7495614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7497058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7498476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7499890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7501354Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7502744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7504157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7505680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7507227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7508622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7510047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7511486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7512906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7514322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7515731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7517166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%] 2024-08-07T18:08:30.7518631Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%]
2024-08-07T18:08:30.7520066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%]
2024-08-07T18:08:30.7521499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%]
2024-08-07T18:08:30.7522940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%]
2024-08-07T18:08:30.7524443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%]
2024-08-07T18:08:30.7525963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%]
2024-08-07T18:08:30.7527376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 13%]
2024-08-07T18:08:30.7528814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7530252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7531702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7533122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7534545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7535980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7537383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7538815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7540232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7541694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7543181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7544697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7546109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7547548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7548959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7550383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7551820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7553247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7554667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7556071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7557505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7558934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7560360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7561865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7563387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7564804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7566232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7567650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7569075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7570491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7571935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7573331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7574735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7576171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7577573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7578996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7580486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7582025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7583421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7584842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7586259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7587690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7589084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7590507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7591947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7593360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7594788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7596461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7597902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7599438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7600969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7602388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7603816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7605228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7606639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7608033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7609461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7610883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7612314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7613719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7615136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7616564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7618086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7619601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7621015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7622461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7623864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7625284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7626694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7628136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7629544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7630974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7632394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7633823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7635242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7636725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7638231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7639636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7641053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7642455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7643889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7645302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7646719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7648128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7649560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7650990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7652420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7653821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7655315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7656828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7658228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7659644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7661077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7662525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7663923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7665349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7666766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7668252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7669662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7671116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7672519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7673923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7675409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7676907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7678330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7679749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7681193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7682593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7684019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7685436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7686845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7688248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7689687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7691128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7692530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7694031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7695773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7697227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7698635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7700074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7701515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7702958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7704378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7705794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7707202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7708636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7710034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7711472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7712993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7714517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7715937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7717379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7718818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7720233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7721678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7723092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7724542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7725971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7727405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7728834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7730283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7731823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7733350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7734769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7736195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7737656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7739074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7740523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7741972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7743417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7744823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7746260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7747689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7756933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7758672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7760263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7761692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7763151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7764577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7766008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7767444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7768906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7770336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7771752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7773209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7774659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7776099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7777597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7779145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7780581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7782017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7783450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7784889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7786315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7787747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7789169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7790595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7792048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7793468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7794902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7796773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7798342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7799769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7801215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7802645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7804080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7805490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7806927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7808346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7809816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7811237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7812659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7814102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7815609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7817121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7818589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7820075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7821484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7822906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7824313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7825759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7827181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7828596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7830033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7831483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7832896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7834395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7835911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7837337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7838795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7840235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7841671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7843098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7844551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7845976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7847409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7848843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7850325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7851739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7853247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7854751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7856164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7857592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7859012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7860459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7861892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7863330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7864739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7866181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7867613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7869053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7870470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7871983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7873502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7874917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7876356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7877796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7879258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7880686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7882137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7883563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7885017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7886444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
2024-08-07T18:08:30.7887890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7889332Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7890854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7892349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7893779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7895481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7896943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7898374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7899814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7901268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7902699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7904123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.7905546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 
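Every one of these entries is skipped for the same reason: the flash-attention SDPA backend requires an SM80-class GPU (compute capability 8.0, Ampere or newer), and the device on this runner evidently reports an older architecture, so the entire parametrized grid short-circuits before running. A minimal sketch of such a gate, assuming only public torch APIs (platform_supports_flash_attention and TestFlashAttentionGate are illustrative names, not the suite's actual helpers):

import unittest

import torch
import torch.nn.functional as F


def platform_supports_flash_attention() -> bool:
    """True only on a CUDA device with compute capability >= 8.0 (SM80+)."""
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability()
    return major >= 8


@unittest.skipUnless(
    platform_supports_flash_attention(),
    "Does not support SDPA or pre-SM80 hardware",
)
class TestFlashAttentionGate(unittest.TestCase):
    def test_sdpa_runs_in_half_precision(self):
        # Shapes mirror the parametrizations in the log (batch_size=1,
        # seq_len 2048, head_dim 64); chosen for illustration only.
        q = torch.randn(1, 4, 2048, 64, device="cuda", dtype=torch.float16)
        out = F.scaled_dot_product_attention(q, q, q, dropout_p=0.0)
        self.assertEqual(out.shape, q.shape)

On a pre-SM80 machine the decorator reports each case as SKIPPED with the reason string above instead of failing, which is exactly the pattern in this log.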
2024-08-07T18:08:30.7906995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
[... identical SKIPPED [0.0002s] entries for the rest of the head_dim_21 grid at seq_len_k_587 (is_causal in {False, True} x dropout_p in {0_0, 0_22, 0_48} x {bfloat16, float16} x {scale0, scale_l1}), all at [ 14%] ...]
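The test IDs themselves are a Cartesian product: for each seq_len_k/head_dim pair the suite enumerates is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, two dtypes (float16, bfloat16), and two scale choices, yielding 24 skipped cases per head_dim value. PyTorch generates these through its own device/dtype parametrization helpers; a plain pytest equivalent (illustrative only, not the suite's actual decorators) would look like:

import pytest


@pytest.mark.parametrize("scale", ["scale0", "scale_l1"])
@pytest.mark.parametrize("dtype", ["float16", "bfloat16"])
@pytest.mark.parametrize("dropout_p", [0.0, 0.22, 0.48])
@pytest.mark.parametrize("is_causal", [False, True])
def test_flash_attention_vs_math_ref_grads(is_causal, dropout_p, dtype, scale):
    # Stand-in body: the real test compares flash-attention gradients
    # against the math reference; here we only show the skip path that
    # produces the log lines above.
    pytest.skip("Does not support SDPA or pre-SM80 hardware")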
2024-08-07T18:08:30.7941932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%]
[... identical SKIPPED [0.0002s-0.0003s] entries for the full head_dim_256, head_dim_32, head_dim_64, head_dim_72, head_dim_8, and head_dim_96 grids at seq_len_k_587, and for the head_dim_128 grid at seq_len_k_64 through the entry below (is_causal in {False, True} x dropout_p in {0_0, 0_22, 0_48} x {bfloat16, float16} x {scale0, scale_l1}), all at [ 14%] ...]
2024-08-07T18:08:30.8189810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or
pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.8191248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.8192697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.8194197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.8195990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.8197442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 14%] 2024-08-07T18:08:30.8198902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8200411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8201919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8203350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8204808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8206248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8207675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8209121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8210548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8211970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8213449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8214972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8216421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8217905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8219378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8220875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8222304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8223741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8225159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8226627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8228059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8229471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8230899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8232371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8233873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8235282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8236738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8238210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8239706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8241119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8242558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8243978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8245417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8246848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8248261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8249697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8251164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8252647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8254049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8255484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8256978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8258518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8259923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 15%] 2024-08-07T18:08:30.8261390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8262828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8264262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8265679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8267157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8268656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8270127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8271626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8273069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8274528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8275995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8277501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 15%] 2024-08-07T18:08:30.8278926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8280377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8281789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8283223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8284652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8286106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8287517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8288981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8290479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8291909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8293343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8294807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 15%] 2024-08-07T18:08:30.8296571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8298014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8299456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8300878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8302323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8303760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8305204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8306631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8308162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8309667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8311089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8312532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or 
pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8314012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8315514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8316940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8318426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8319858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8321305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8322717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8324160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8325580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8327092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8328560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8329994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8331417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8332908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8334393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8335801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8337270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8338702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8340122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8341539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8342998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8344419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8345893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8347387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8348825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8350243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8351734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8353209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8354622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8356065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8357500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8358923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8360340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8361793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8363204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8364676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 15%] 2024-08-07T18:08:30.8366164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8367683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8369103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8370587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8372070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8373501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8374940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8376370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8377831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8379265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8380708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8382132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 15%] 2024-08-07T18:08:30.8383614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8385087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8386516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8387948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8389440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8391680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8393112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8394531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8396209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8397671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8399103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8400546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 15%] 2024-08-07T18:08:30.8401973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8403526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8405030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8406465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8407908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8409460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8410938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8412368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8413790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8415229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8416659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8418125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 15%]
2024-08-07T18:08:30.8419575Z - 2024-08-07T18:08:30.8783626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_* SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 15%]
  252 consecutive parametrizations, every one skipped with the same reason, covering:
    seq_len_k = 64 with head_dim in {8, 64, 72, 96}, plus the is_causal=True half of head_dim=32
    seq_len_k = 8  with head_dim in {16, 21, 128, 160, 192, 203}
    is_causal in {False, True}
    dropout_p in {0.0, 0.22, 0.48}
    dtype in {float16, bfloat16}
    scale in {scale0, scale_l1}
2024-08-07T18:08:30.8785125Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8786564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8787994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8789455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8790984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8792417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8793859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8795527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8797001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8798444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8799888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8801328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8802848Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8804345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8805758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8807193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8808686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8810204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8811630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8813073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8814559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8816003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8817417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8818909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8820335Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8821838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8823289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8824721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8826140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8827615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8829106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8830515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8831978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8833419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8834849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8836265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8837705Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8839127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8840589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8842062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8843500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8844917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8846363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8847839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8849255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8850692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8852118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8853543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8854965Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8856424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8857826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 15%] 2024-08-07T18:08:30.8859302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8860770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8862225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8863631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8865115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8866590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8868065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8869510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8870953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8872400Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8873809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8875240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8876647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8878125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8879600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8881031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8882445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8883883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8885354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8886808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8888234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8889658Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8891111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8892524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8893964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8895617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8897093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8898585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8900101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8901531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8902980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8904465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8905965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8907380Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8908810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8910236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8911662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8913107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8914534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8915949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8917403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8918935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8920356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8921804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8923263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8924749Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8926166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8927580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8929004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8930424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8931892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8933314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8934741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8936203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8937701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8939106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8940536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8942020Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8943510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8944900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8946305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8947737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8949151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8950570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8952008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8953439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8954942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8956478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8957885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8959322Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8960792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8962291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8963699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8965146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8966579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8967988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8969420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8970851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8972331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8973787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8975283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8976699Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8978140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8979593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8981066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8982505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8983980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8985390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8986795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8988232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8989661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8991082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8992562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8994063Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8995775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8997235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.8998762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9000288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9001729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9003190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9004626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9006083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9007528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9008955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9010399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 
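Each head_dim/is_causal group in the output above contains exactly twelve SKIPPED entries: the cross product of three dropout probabilities (0.0, 0.22, 0.48), two dtypes (bfloat16, float16), and two scale variants (scale0, scale_l1). A minimal sketch of that grid, using plain itertools rather than the suite's actual parametrization helpers (the names below are only illustrative):

from itertools import product

# The twelve per-group variants visible in the test IDs above:
# 3 dropout_p values x 2 dtypes x 2 scale choices = 12.
dropout_ps = ["0_0", "0_22", "0_48"]
dtypes = ["bfloat16", "float16"]
scales = ["scale0", "scale_l1"]

for p, dt, s in product(dropout_ps, dtypes, scales):
    # Mirrors the suffix of the generated test names in this log,
    # in the same iteration order (dropout outermost, scale innermost).
    print(f"dropout_p_{p}_{dt}_{s}_cuda_{dt}")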
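The shared skip reason, "Does not support SDPA or pre-SM80 hardware", points to a capability guard that bails out before the flash-attention kernels run on GPUs older than SM80 (Ampere). A minimal sketch of such a guard, with a hypothetical helper name; the real check lives in PyTorch's test utilities, not in this snippet:

import unittest
import torch

def supports_flash_attention() -> bool:
    # Hypothetical helper: flash-attention kernels require compute
    # capability (8, 0) or newer; older GPUs take the skip path.
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_capability() >= (8, 0)

class TestSDPASketch(unittest.TestCase):
    @unittest.skipIf(not supports_flash_attention(),
                     "Does not support SDPA or pre-SM80 hardware")
    def test_flash_attention_vs_math_ref_grads(self):
        ...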
2024-08-07T18:08:30.9011883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9013418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9014841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9016276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9017794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9019310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9020726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9022168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9023623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9025083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9026500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9027948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%]
2024-08-07T18:08:30.9029385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9030866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9032371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9033813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9035267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9036759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9038257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9039681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9041137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9042600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9044052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9045481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%]
2024-08-07T18:08:30.9046932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9048366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9049855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9051328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9052786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9054242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9055701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9057191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9058615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9060070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9061500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9062967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%]
2024-08-07T18:08:30.9064393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9065849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9067269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9068753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9070232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9071670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9073127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9074595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9076097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9077527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9078960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9080389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not
support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9081833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9083285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9084724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9086137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9087646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9089131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9090562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9091972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9093461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9094969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9096598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9098030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9099472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9100928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9102347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9103814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9105251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9106796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9108291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9109739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9111171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9112689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9114219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9115643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9117089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9118572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9120014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9121435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9122917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9124358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9125842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9127314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9128766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9130201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9131688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9133177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9134626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9136066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9137497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9138949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9140387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9141855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9143310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9144810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9146302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9147767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9149184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9150678Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9152162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9153637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9155061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9156497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9157947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9159387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9160828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9162256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9163777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9165265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9166707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 
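Each of the skipped entries above is one point in a parameter sweep: the test node ID encodes batch_size, seq_len_q, seq_len_k, head_dim (8, 16, 21, 32, 64, 72, 128, 160, 192, 203, 256 appear in this shard), is_causal, dropout_p (0.0, 0.22, 0.48), the dtype (float16 or bfloat16), and a scale variant (scale0 or scale_l1). Below is a minimal pytest-style sketch of how stacked parametrize decorators produce one collected test per combination; the decorator stack and empty body are illustrative only, not PyTorch's actual test code (PyTorch composes its device/dtype-suffixed names through its own parametrization helpers).

import pytest

# Stacking parametrize decorators yields one test per combination of values;
# pytest embeds each parameter value in the node ID, which is why the log
# above lists hundreds of distinct test_flash_attention_vs_math_ref_grads
# entries that differ only in head_dim / is_causal / dropout_p / dtype.
@pytest.mark.parametrize("dtype", ["float16", "bfloat16"])
@pytest.mark.parametrize("dropout_p", [0.0, 0.22, 0.48])
@pytest.mark.parametrize("is_causal", [False, True])
@pytest.mark.parametrize("head_dim", [8, 16, 21, 32, 64, 72, 128, 160, 192, 203, 256])
def test_flash_attention_vs_math_ref_grads(head_dim, is_causal, dropout_p, dtype):
    ...  # body elided; each combination is collected and reported separately
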
2024-08-07T18:08:30.9168185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9169699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9171191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9172618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9174060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9175520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9176958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9178380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9179829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9181262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9182763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9184257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
16%] 2024-08-07T18:08:30.9185705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9187121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9188623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9190092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9191519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9192947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9194420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9196144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9197597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9199029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9200460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9201978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 
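Every parameterization in this sweep reports the same skip reason, "(Does not support SDPA or pre-SM80 hardware)": the FlashAttention backend requires an SM80-class (Ampere) or newer GPU, so on older hardware the entire sweep is skipped in a fraction of a millisecond per test rather than executed. A minimal sketch of how such a gate can be expressed follows; supports_flash_attention and TestExample are hypothetical names, and this is not PyTorch's actual skip helper, though torch.cuda.get_device_capability and unittest.skipIf are real APIs.

import unittest
import torch

def supports_flash_attention() -> bool:
    # FlashAttention kernels need CUDA compute capability >= 8.0 (SM80).
    return torch.cuda.is_available() and torch.cuda.get_device_capability() >= (8, 0)

class TestExample(unittest.TestCase):
    # skipIf marks the test SKIPPED with the given reason instead of running it,
    # which is exactly the shape of the log records above.
    @unittest.skipIf(not supports_flash_attention(),
                     "Does not support SDPA or pre-SM80 hardware")
    def test_flash_attention_vs_math_ref(self):
        ...  # would compare flash-attention gradients against the math reference

Since torch.cuda.get_device_capability() returns a (major, minor) tuple, the tuple comparison >= (8, 0) admits SM80, SM86, SM90, and newer while rejecting anything earlier.
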
2024-08-07T18:08:30.9203471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9204946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9206379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9207875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9209371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9210816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9212255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9213723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9215155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9216608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9218088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9219537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 16%] 2024-08-07T18:08:30.9221007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9222478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9223944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9225352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9226824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9228303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9229753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9231171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9232615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9234061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9235513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9236936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9238374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9239845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9241333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9242765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9244204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9245693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9247230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9248659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9250081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9251530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9252964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9254421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9255845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9257288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9258751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9260244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9261658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9263080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9264597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9266060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9267483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9268903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9270358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9271775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9273207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9274668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9276142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9277598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9279085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9280543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9281992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9283456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9284956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9286400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9287831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9289276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9290700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9292141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9293574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9295289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9296723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9298263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9299761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9301182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9302597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9304074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9305604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9307014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9308454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9309874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9311321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9312737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9314200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9315622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9317114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9318631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9320070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9321497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9322994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 16%] 2024-08-07T18:08:30.9324471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 16%]
2024-08-07T18:08:30.9325896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED (Does not support SDPA or pre-SM80 hardware) -- identical skip entries for every parameterization below, 0.0002s-0.0003s each, through 2024-08-07T18:08:30.9656838Z:
  batch_size=1, seq_len_q=256, seq_len_k=1024: head_dim 72 (tail of its block), 8, 96 [ 16%]
  batch_size=1, seq_len_q=256, seq_len_k=128: head_dim 128, 160, 16 [ 16%]; 192, 203, 21, 256 [ 17%]
  per head_dim: is_causal {False, True} x dropout_p {0_0, 0_22, 0_48} x dtype {bfloat16, float16} x scale {scale0, scale_l1} = 24 variants
pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9658315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9659790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9661216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9662636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9664089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9665583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9667019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9668448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9669864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9671306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9672730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9674161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9675576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9677027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9678494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9679953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9681372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9682783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9684272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9685743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9687198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9688625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9690065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9691470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 17%] 2024-08-07T18:08:30.9692905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9694326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9696020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9697585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9699089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9700507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9701932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9703438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9704919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9706363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9707822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9709257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 17%] 2024-08-07T18:08:30.9710673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9712116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9713536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9714951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9716400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9717948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9719374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9720778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9722257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9723722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9725159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9726570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 
2024-08-07T18:08:30.9728022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9729439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9730884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9732300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9733726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9735196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9736695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9738128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9739563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9741033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9742507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9743933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 
2024-08-07T18:08:30.9745355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9746789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9748228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9749650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9751068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9752508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9753979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9755461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9756877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9758334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9759815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9761350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 
2024-08-07T18:08:30.9762760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9764183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9765622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9767020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9768510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9769954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9771394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9772847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9774330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9775757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9777207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9778685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 
2024-08-07T18:08:30.9780168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9781575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9783020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9784448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9785857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9787350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9788779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9790203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9791660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9793205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9794630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9796377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9797939Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9799456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9800873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9802297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9803711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9805126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9806564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9807988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9809424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9810909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9812428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9813840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9815269Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9816720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9818269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9819669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9821092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9822514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9823925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9825345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9826759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9828216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9829681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9831171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9832580Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9834033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9835529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9837024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9838464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9839917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9841361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9842794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9844220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9845651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9847110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9848604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 
2024-08-07T18:08:30.9850159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9851582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9853024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9854481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9855971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9857390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9858871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9860296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9861727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9863151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9864612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9866034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 17%] 2024-08-07T18:08:30.9867498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9869026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9870466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9871898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9873364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9874874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9876307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9877752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9879206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9880654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9882092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9883540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9884974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9886442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9887941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9889360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9890794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9892262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9893780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9895488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9896965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9898417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9899864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9901283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 17%] 2024-08-07T18:08:30.9902731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 17%]
[2024-08-07T18:08:30.9904160Z .. 2024-08-07T18:08:31.0252673Z] test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_2048_head_dim_{16,192,203,21,256,32,64,72,8,96}_is_causal_{False,True}_dropout_p_{0_0,0_22,0_48}_{bfloat16,float16}_{scale0,scale_l1}_cuda_{bfloat16,float16}: all 240 parametrizations SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 17%] -> [ 18%]
2024-08-07T18:08:31.0254159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%]
[.. 8 further seq_len_k_256/head_dim_128 parametrizations (is_causal_False, dropout_p 0_0 through 0_48, bfloat16/float16 x scale0/scale_l1), all SKIPPED [0.0002s] with the same reason ..]
2024-08-07T18:08:31.0267158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA
or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0268628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0270071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0271506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0272989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0274442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0275866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0277292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0278738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0280140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0281586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0283012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0284445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0285889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0287360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0288803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0290263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0291740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0293206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0294649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0296377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0297832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0299238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0300692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0302137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0303564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0305073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0306597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0308023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0309427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0310950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0313125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0314572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0315985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0317418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0318879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0320381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0321817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0323244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0324734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0326236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0327645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0329052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0330551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0332050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0333477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0334890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0336327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0337741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0339168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0340580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0342029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0343479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0344946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0346347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0347775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0349240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0350687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0352129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0353547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0354989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 18%] 2024-08-07T18:08:31.0356390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0357810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0359234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0360676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0362149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0363641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0365066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0366521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0367982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0369509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0370939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0372389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0373830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0375237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0376672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0378098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0379522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0380972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0382482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0383907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0385331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0386784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0388275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0389692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0391098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0392548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0393960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0395705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0397143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0398582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0400088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0401633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0403056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0404496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0405983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0407502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0408912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0410351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0411792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0413212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0414636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0416060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0417503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0419020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0420508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0422004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0423444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0424922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0426402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0427813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0429257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0430683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0432110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0433540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0435018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0436466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0437919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0439409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0440844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0442381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0443834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0445362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0446769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0448201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0449608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0451031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0452475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0453903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0455323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0456778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0458264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0459685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 18%] 2024-08-07T18:08:31.0461112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0462591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0464092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0465517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0466947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0468374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0469819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0471250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0472715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0474138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0475620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0477127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0478550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0479988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0481449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0482968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0484374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0485799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0487225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0488671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0490080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0491516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0492961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%] 2024-08-07T18:08:31.0494433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or 
pre-SM80 hardware) [ 18%]
2024-08-07T18:08:31.0496176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 18%]
2024-08-07T18:08:31.1086040Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1088720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1091402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1094049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1096937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1099561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1102194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1104832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1107481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1110109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1112832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1115588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1118254Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1120910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1123741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1126770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1129429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1132091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1134749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1137403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1140066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1142721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1145400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1148132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1150842Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1153474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1156124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1158831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1161524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1164167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1166820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1169489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1172138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1174779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1177430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1180091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1182774Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1185465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1188099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1190758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1193452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1196408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1199088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1201752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1204409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1207243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1209910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1212585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1215239Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1218237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1221002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1223735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1226371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1229344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1232340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1235001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1237642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1240673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1243688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1246391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1249271Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1251912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1254961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1257951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1260868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1263510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1266552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1269910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1273050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1275714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1278377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1281764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1285027Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1288393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1291071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1294421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1298495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1301606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1305651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1308849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1312015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1314625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1317265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1319974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1322614Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1325251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1327916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1330560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1333199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1335995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1338727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1341387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1344041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1346726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1349431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1352095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1354745Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1357395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1360039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1362712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1365347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1367980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1370683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1373388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1376019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1378646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1381310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1384030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1386672Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1389305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1391939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1394575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1397549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1400190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1402850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1405601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1408321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1410958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1413580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1416294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1419051Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1421709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1424373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1427015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1429614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1432234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1434870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1437520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1440240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1442971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1445595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1448245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1450996Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1453665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1456301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1459005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1461633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1464274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1466923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1469591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1472240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1474921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1477666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1480320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1482959Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1485842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1488556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1491246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1494011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1497125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1499745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1502397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1505051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1507673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1510380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1513098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.1515733Z 
2024-08-07T18:08:31.1518392Z – 2024-08-07T18:08:31.2112477Z  [~250 consecutive pytest records, identical apart from parameters; condensed]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED [0.0002s–0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%]
Every parameterization in this stretch of the grid is skipped with the same reason:
  batch_size: 1 · seq_len_q: 256 · seq_len_k: 4 (tail of that sub-grid), then 587
  head_dim: 8, 16, 21, 32, 64, 72, 96 (seq_len_k=4 only within this excerpt), 128, 160, 192, 203, 256
  is_causal: False, True · dropout_p: 0.0, 0.22, 0.48
  dtype: float16, bfloat16 · scale: scale0, scale_l1
The excerpt begins and ends mid-enumeration; the listing continues in the same form beyond it.
2024-08-07T18:08:31.2113957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2115362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2116791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2118201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2119675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2121082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2122514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2123924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2125368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2126834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2128235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2129688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2131139Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2132847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2134300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2135745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2137146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2138568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2140008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2141452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2142848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2144331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2145802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2147229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2148650Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2150171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2151643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2153055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2154472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2155870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 19%] 2024-08-07T18:08:31.2157357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2158772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2160212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2161609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2163079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2164545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2165951Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2167377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2168899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2170412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2171820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2173250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2174669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2176104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2177511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2178948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2180377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2181860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2183315Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2184751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2186159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2187621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2189089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2190493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2191934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2193363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2194785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2196650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2198102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2199551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2201056Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2202546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2203983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2205403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2206883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2208355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2209784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2211233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2212645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2214076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2215497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2216946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2218356Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2219911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2221378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2222815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2224215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2225682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2227143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2228555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2230000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2231413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2232850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2234265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2235694Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2237098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2238568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2240056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2241477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2242876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2244350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2245817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2247235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2248639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2250077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2251517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2252914Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2254342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2255747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2257218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2258655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2260098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2261498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2262967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2264407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2265820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2267222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2268638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2270076Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2271475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2272913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2274339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2275817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2277275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2278716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2280163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2281632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2283099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2284528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2285949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2287380Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2288794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2290225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2291660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2293059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2294474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2296527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2298074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2299472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2300914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2302380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2303941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2305339Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2306767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2308187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2309604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2311046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2312455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2313887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2315361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2316838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2318244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2319724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2321213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2322687Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2324096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2325525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2326940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2328355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2329764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2331196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2332628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2334070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2335544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2336951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2338382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2339827Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%]
[... 251 further parametrized variants of TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads, timestamped 2024-08-07T18:08:31.2341317Z through 2024-08-07T18:08:31.2701858Z, each SKIPPED in 0.0002s-0.0003s with the identical reason "(Does not support SDPA or pre-SM80 hardware)" at [ 20%] progress. This window walks the parameter grid batch_size=1, seq_len_q=256, seq_len_k in {64, 8}, head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {bfloat16, float16}, scale in {scale0, scale_l1}; it enters mid-way through the seq_len_k=64, head_dim=203 variants and is cut off mid-way through the seq_len_k=8, head_dim=192 variants. ...]
2024-08-07T18:08:31.2703269Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2704704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2706108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2707541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2708939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2710360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2711853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2713333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2714750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2716151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2717616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2719118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2720542Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2721964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2723404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2724817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2726236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2727644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2729079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2730543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2732029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2733436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2734846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2736322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2737779Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2739205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2740611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2742062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2743459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2744870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2746284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2747718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2749162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2750627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2752050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2753464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2754924Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2756375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2757805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2759219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2760643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2762045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2763514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2764930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2766346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2767742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2769261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2770766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2772180Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2773580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2775029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2776499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2777891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2779305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2780730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2782159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2783554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2784972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2786420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2787891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2789332Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2790749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2792164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2793624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2795452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2796915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2798356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2799776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2801208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2802619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2804056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2805469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2806965Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2808446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2809876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2811294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 20%] 2024-08-07T18:08:31.2812753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2814239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2815647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2817089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2818493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2819956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2821401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2822845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2824239Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2825711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2827163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2828598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2829996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2831480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2832941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2834355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2835770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2837180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2838607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2840025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2841472Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2842914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2844369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2845822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2847234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2848625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2850098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2851575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2852969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2854383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2855796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2857223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2858613Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2860035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2861467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2862944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2864383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2865804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2867206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2868680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2870122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2871564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2872973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2874437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2875855Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2877258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2878686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2880093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2881561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2883002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2884430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2885837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2887248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2888692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2890775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2892205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2893605Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2895312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2896754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2898194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2899590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2901006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2902529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2904094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2905487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2906916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2908387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2909906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2911303Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2912737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2914157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2915559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2916972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2918375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2919838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2921307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2922794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2924192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2925614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2927064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2928526Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2929921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2931351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2932786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2934173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2935585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2937001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2938426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2939858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2941327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2942783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2944216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2945654Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2947142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2948536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2949964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2951360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2952777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2954194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2955607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2957014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2958450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2959924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2961330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2962760Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2964197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2965736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2967140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2968555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2969956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2971389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2972831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2974239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2975670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2977120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2978603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2980006Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2981415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2982895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2984376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2985761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2987173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2988577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2990006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2991396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2992819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2994245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2996004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2997546Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.2999018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3000455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3001884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3003391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3004872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3006371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3007801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3009228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3010641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3012064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3013530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3014934Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3016483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3017949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3019428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3020834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3022354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3023835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3025273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3026677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3028101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3029514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3030955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3032361Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3033783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3035260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3036737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3038157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3039572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3041059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3042534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3043975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3045405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3046832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3048251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3049679Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3051099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3052537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3053994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3055468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3056893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3058311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3059795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3061247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3062677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3064095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3065533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3066930Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3068400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3069828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3071262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3072712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3074173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3075610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3077031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3078496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3079955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3081393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3082829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3084259Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3085658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3087082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3088500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3089906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3091343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3092842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3094262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3095934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3097443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3098920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3100351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3101758Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3103203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3104623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3106065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3107473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3108907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3110389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3111901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3113338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3114755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3116239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3117717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3119176Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3120597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3122037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3123478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3124901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3126305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3127740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3129214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3130689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3132094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3133564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3135026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3136483Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3137953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3139372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3140819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3142222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3143669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3145095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3146544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3147996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3149482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3150902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3152344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3153814Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3155280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3156707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3158113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3159543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3160948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3162381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3163833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3165254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3166698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3168188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3169606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3171022Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3172469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3173973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3175387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3176781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3178207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3179629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3181065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3182471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3183919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3185377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3186867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3188275Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3189697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3191145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3192642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3194043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3195711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3197161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3198579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3199997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3201403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3202835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3204255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3205801Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3207276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3208709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3210134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3211614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3213094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3214543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3215972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3217391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3218857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3220290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3221744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%] 2024-08-07T18:08:31.3223162Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%]
2024-08-07T18:08:31.3224642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 21%]
[... ~248 further test_flash_attention_vs_math_ref_grads parametrizations elided: batch_size 1, seq_len_q 4, seq_len_k 1024/128, head_dim 8–256, is_causal False/True, dropout_p 0.0/0.22/0.48, float16/bfloat16, scale0/scale_l1 — each SKIPPED in ~0.0002s with the same reason "(Does not support SDPA or pre-SM80 hardware)", progress advancing from 21% to 22% ...]
2024-08-07T18:08:31.3591872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.3593351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.3594747Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3596422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3597911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3599407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3600792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3602205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3603637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3605064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3606452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3607871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3609278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3610767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3612225Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3613642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3615052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3616494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3617943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3619364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3620786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3622192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3623618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3625009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3626432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3627837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3629285Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3630737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3632155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3633599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3635039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3636509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3637918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3639352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3640753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3642176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3643599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3645033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3646435Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3647852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3649298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3650782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3652168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3653567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3655042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3656510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3657933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3659343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3660773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3662210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3663630Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3665037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3666457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3667906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3669416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3670814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3672243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3673745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3675215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3676630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3678037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3679468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3680857Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3682282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3683688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3685099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3686518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3687971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3689375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3690796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3692239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3693682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3695367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3696800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3698210Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3699597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3701019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3702443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3703848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3705401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3706914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3708323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3709735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3711186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3712670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3714095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3715484Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3716911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3718305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3719761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3721157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3722581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3724022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3725507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3726898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3728305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3729749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3731225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3732622Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3734019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3735446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3736849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3738294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3739700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3741128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3742607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3744078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3745478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3746901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3748352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3749819Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3751212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3752631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3754055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3755444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3756851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3758252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3759679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3761111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3762591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3763996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3765410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3766842Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3768325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3769718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3771121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3772565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3773960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3775383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3776794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3778208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3779596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3781058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3782544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3783951Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3785347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3786808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3788256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3789657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3791048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3792474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3793890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3795549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3796983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3798374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3799884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3801341Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3802774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3804178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3805696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3807152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3808565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3809975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3811386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3812818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3814225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3815660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3817069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3818524Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3820015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3821443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3822869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3824316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3825752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3827168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3828572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3830057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3831432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3832852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3834284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3835678Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3837126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3838594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3840041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3841462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3842955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3844427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3845865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3847272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3848713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3850129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3851554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3853008Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3854422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3855892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3858063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3859488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3860885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3862368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3863865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3865280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3866685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3868123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3869543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3870965Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3872374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3873819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3875305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3876761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3878195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3879616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3881106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3882570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3884015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3885431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3886882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3888293Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3889728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3891147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3892566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3894047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3895786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3897249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3898696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3900205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3901691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3903127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3904561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3906053Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3907467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3908901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3910325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3911751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3913155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3914645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3916174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3917577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3919052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3920526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3922019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3923424Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3924861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3926273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3927706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3929106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3930531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3931933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3933418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3934890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3936292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3937722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3939184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3940653Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3942057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3943511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3944943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3946369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3947785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3949228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3950653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3952133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3953618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3955043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3956489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3957945Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3959426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3960850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3962290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3963719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3965142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3966567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3968013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3969461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3970974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3972471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3973920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3975343Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3976796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3978286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3979715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3981149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3982562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3984027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3985454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3986887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3988299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3989793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3991271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3992706Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3994149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3995922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3997445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.3998845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4000273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4001689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4003122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4004551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4005981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4007392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4008886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4010352Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4011784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4013193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4014694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4016145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4017547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4018986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4020454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4021877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4023283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4024747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4026169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4027633Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4029090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4030517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4031929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4033386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4034852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4036262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4037695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4039101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4040519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4041921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4043358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4044775Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4046238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4047703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4049150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4050556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4052030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4053511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4054986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4056398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4057814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4059251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4060677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4062107Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4063525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4065013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4066474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4067891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4069286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4070756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4072228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4073645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4075055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4076475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4077915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4079319Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4080752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4082169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4083670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4085114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4086540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4087952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4089431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4090889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4092314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4093736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4095431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4096863Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4098270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4099699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4101111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4102595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4104127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4105563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4106977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4108398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4109856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4111350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4112762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%] 2024-08-07T18:08:31.4114192Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4115591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4117001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4118439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4119882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4121311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4122773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4124284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4125686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4127113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4128588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 22%]
2024-08-07T18:08:31.4130085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4131489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4132919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4134347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4135777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4137177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4138579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4139998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4141453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4142920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4144344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4145772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4147226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4148691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4150089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4151523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4152988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4154451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4155839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4157264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4158702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4160150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4161629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4163048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4164516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4165964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4167438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4168842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4170274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4171671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4173090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4174529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4175967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4177362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4178802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4180276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4181690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4183108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4184579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4186061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4187464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4188883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4190286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4191709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4193119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4194564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4196207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4197707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4199231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4200633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4202053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4203506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4205024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4206409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4207824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4209233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4210658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4212045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4213464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4214896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4216347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4217803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4219238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4220671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4222128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4223590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4225016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4226451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4227877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4229307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4230717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4232158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4233576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4235065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4236523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4237932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4239358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4240796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4242264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4243669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4245120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4246515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4247943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4249349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4250784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4252184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4253599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4255078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4256558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4257970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4259373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4260847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4262316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4263731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4265153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4266585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4268001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4269478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4270892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4272313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4273775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4275258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4276652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4278059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4279530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4280971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4282391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4283802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4285248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4286636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4288053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4289472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4290907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4292347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4293815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4295502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4296950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4298445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4299919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4301356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4302776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4304226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4305629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4307055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4308468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4309876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4311326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4312814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4314237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4315652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4317095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4318550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4320017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4321419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4322833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4324239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4325673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4327069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4328478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4329926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4331405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4332796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4334221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4335672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4337149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4338556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4339955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4341371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4342764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4344184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4345576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4347001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4348448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4349899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4351294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4352706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4354171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4355635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4357022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4358429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4359869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4361267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4362689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4364130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4365571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4367011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4368490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4369906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4371338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4372780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4374282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4375682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4377089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4378504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4379896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4381319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4382735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4384171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4385614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4387090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4388505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4389914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4391353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4392832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4394265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4395956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4397376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4398800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4400238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4401651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4403080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4404518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4406166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4407643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4409066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4410470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4411970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4413427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4414870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4416280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4417699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4419110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4420558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4421986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4423396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4424882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4426335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4427755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4429162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4430611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4432054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4433479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4434919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4436331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4437725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4439132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4440566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4441953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4443411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4444880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4446302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4447682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4449129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4450579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4452004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4453398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4454842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4456242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4457650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4459066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4460459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4461933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4463394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4464841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4466240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4467715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4469178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4470596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4472008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4473448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4474885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%]
2024-08-07T18:08:31.4476306Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4477728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4479126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4480596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4482052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4483469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4484893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4486367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4487808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4489223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4490629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4492063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4493456Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4494898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4496577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4497997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4499479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4500946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4502378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4503786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4505269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4506731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4508149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4509558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4510977Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4512378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4513792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4515212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4516606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4518053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4519543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4520977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4522363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4523819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4525295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4526717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4528107Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4529523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4530930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4532357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4533756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4535191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4536593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4538052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4539518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4540916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4542341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4543807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4545266Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4546690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4548109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4549511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4550915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4552304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4553722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4555129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4556571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4558036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4559438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4560867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4562303Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4563776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4565185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4566620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4568017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4569480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4570899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4572332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4573746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4575207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4576661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4578073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4579496Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4580935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4582395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4583802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4585209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4586605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4588018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4589421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4590834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4592230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4593705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4595469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4596880Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4598290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4599775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4601259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4602639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4604065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4605478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4606901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4608289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4609704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4611106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4612589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4614062Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4615508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4616924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4618370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4619880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4621269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4622688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4624105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4625518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4626909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4628325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4629746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4631189Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4632625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4634112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4635539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4636932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4638397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4639852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4641283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4642682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4644121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4645530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4646973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4648363Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4649784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4651226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4652702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4654097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4655489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4656956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4658410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4659818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4661216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4662641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4664058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4665477Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4666879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4668300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4669747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4671199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4672597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4674018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4675489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4676928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4678337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4679750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4681176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4682560Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4683997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4685401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4686810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4688235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4689684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4691083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4692508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4693964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4695729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4697160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4698565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4699974Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4701354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4702773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4704197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4705604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4707067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4708559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4709968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4711378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4712831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4714317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4715740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4717126Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4718549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4719984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4721406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4722794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4724220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4725659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4727131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4728514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4729953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4731350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4732818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4734285Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4735679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4737096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4738498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4739899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4741287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4742709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4744118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4745565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4747001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4748413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4749816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4751244Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4752696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4754099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4755516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4756888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4758283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4759675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4761097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4762474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4763924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4765380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4766797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4768169Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4769615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4771063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4772474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4773919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4775325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4776751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4778165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 23%] 2024-08-07T18:08:31.4779586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4780988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4782518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4783994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4785412Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4786806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4788264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4789708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4791092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4792503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4793915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4795633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4797041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4798459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4799859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4801360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4802814Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4804240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4805640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4807122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4809174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4810560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4811979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4813392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4814815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4816211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4817639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4819050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4820501Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4821954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4823428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4824834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4826238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4827663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4829130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4830554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4831923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4833330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4834738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4836164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4837551Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4838947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4840397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4841876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4843262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4844672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4846112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4847584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4848965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4850352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4851771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4853166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4854580Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4855972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4857372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4858803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4860243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4861621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4863029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4864478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4865921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4867299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4868735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4870166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4871539Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4872938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4874344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4875772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4877205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4878676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4880085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4881513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4882976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4884463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4885865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4887279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4888693Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4890095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4891519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4892926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4894336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4896130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4897710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4899120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4900519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4901906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4903393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4904891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4906298Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4907689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4909090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4910511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4911898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4913303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4914700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4916189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4917635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4919039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4920472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4921944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4923374Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4924784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4926175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4927569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4928960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4930345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4931767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4933206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4934671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4936108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4937527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4938927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4940375Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4941804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4943265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4944681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4946072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4947471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4948864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4950289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4951674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4953123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4954567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4955997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4957385Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4958838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4960273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4961679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4963053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4964434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4965848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4967243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4968645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4970032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4971482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4972923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4974319Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4975710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4977177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4978628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4980022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4981408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4982828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4984228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4985625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4987033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4988433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4989844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.4991268Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED [0.0002s-0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%]
[condensed: 252 consecutive SKIPPED entries logged between 2024-08-07T18:08:31.4992721Z and 2024-08-07T18:08:31.5352719Z, every one with the skip reason above. All tests use batch_size=1 and seq_len_q=4; the parametrization sweeps is_causal in {False, True} x dropout_p in {0.0, 0.22, 0.48} x dtype in {bfloat16, float16} x scale in {scale0, scale_l1} over:
  seq_len_k=4:   head_dim in {72 (from is_causal=False, dropout_p=0.48, float16 onward), 8, 96}
  seq_len_k=587: head_dim in {128, 160, 16, 192, 203, 21, 256, 32 (through is_causal=True, dropout_p=0.48, bfloat16)}]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5354130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5355593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5357067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5358457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5359856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5361281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5362689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5364098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5365521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5366944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5368394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5369858Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5371264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5372677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5374119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5375587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5376969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5378375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5379810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5381194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5382600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5384005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5385446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5386872Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5388331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5389738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5391165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5392594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5394061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5395782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5397254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5398653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5400056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5401483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5402900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5404316Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5405847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5407343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5408750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5410151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5411595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5413082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5414484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5415907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5417335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5418742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5420211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5421612Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5423032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5424430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5425918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5427357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 24%] 2024-08-07T18:08:31.5428758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5430160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5431631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5433069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5434483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5435904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5437314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5438718Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5440114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5441531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5442926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5444372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5445820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5447231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5448632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5450077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5451508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5452920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5454322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5455759Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5457143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5458547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5459983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5461372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5462825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5464283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5465720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5467112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5468619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5470104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5471531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5472922Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5474352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5475756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5477151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5478556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5479944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5481400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5482854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5484281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5485675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5487138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5488594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5489993Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5491391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5492824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5494254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5495949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5497387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5498800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5500325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5501791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5503218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5504624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5506122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5507571Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5508990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5510385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5511807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5513192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5514610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5516020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5517457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5518893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5520374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5521805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5523210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5524673Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5526122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5527544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5528953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5530364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5531765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5533192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5534619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5536019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5537438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5538892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5540375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5541767Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5543185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5544662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5546131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5547520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5548933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5550338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5551764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5553153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5554583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5555981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5557430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5558888Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5560282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5561696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5563144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5564606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5565993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5567419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5568827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5570226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5571616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5573044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5574506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5575938Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5577398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5578797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5580212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5581634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5583159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5584571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5585992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5587383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5588785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5590176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5591593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5592976Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5594468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5596258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5597686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5599093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5600587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5602084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5603489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5604924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5606330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5607749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5609163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5610575Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5611975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5613446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5614956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0004s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5616344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5617745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5619187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5620700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5622088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5623497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5624917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5626346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5627731Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5629144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5630546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5632033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5633461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5636184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5638838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5641494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5644179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5646904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5649560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5652238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5654906Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5658931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5661586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5664257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5666883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5669580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5672280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5675475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5678904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5681617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5684313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5686961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5689617Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5692232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5696006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5699330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5702009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5704617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5707352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5710071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5712707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5716093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5719441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5722222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5724858Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5727480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5730110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5733003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5736061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5739182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5741809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5744531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5747252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5749860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5752691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5755771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5758475Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5761077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5763697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5766357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5769040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5771675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5774343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5776998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5779678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5782377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5785027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5787685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5790373Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5793079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5796039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5798697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5801356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5803957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5806590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5809260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5811899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5814621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5817348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5820040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5822671Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5825352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5828034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5830689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5833316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5835942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5838582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5841224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5843865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5846495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5849157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5851843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5854527Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5857144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5859767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5862417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5865109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5867732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5870379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5873010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5875628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5878224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5880844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5883468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5886157Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5889543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5892167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5894786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5897867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5900557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5903191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5905893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5908527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5911126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5913765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5916475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5919098Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5921829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5924574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5927194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5929807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5932480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5935144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5937773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5940394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5943015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5945626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5948257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5950873Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5953488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5956155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5958845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5961495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5964105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5966797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5976814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5979497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5982159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5984806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5987462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5990112Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5992734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5995657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.5998462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6001170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6003760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6006397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6009135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6011835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6014437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6017083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6019749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6022383Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6025017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6027639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6030272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6032877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6035512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6038192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6040816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6043479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6046137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6048803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6051432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6054059Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6056667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6059275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6061917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6064529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6067101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6069822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6072502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6075112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6077739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6080421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6083074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6085685Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6088299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6090920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6093552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6096499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6099140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6101774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6104522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6107241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6109867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6112522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6115231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6117989Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6120657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6123291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6125925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6128529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6131148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6133751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6136410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6139102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6141784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6144407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6147035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6149696Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6152357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6155012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6157653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6160284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6162902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6165537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6168192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6170828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6173537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6176229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6178887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6181500Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6184120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6186788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6189473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6192133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6194746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6197685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6200358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6202990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6205608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6208229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6210994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6213678Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6216295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6218920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6221680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6224386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6227041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6229675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6232328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6234968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6237593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6240239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6242884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6245569Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6248235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6250869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6253528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6256202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6258859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6261499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6264124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6266807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6269418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6272044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6274685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6277314Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6279985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6282679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6285324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6287983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6290648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6293322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6296344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6298990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6301636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6304261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6306899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6309526Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6312126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6314855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6317550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6320222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6322853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6325532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6328219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6330841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6333443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6336065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6338723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6341331Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6343919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6346624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6349294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6351989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6354672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6357369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6360003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6362692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6365376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6368015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6370744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6373391Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6376086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6378708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6381352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6383989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6386665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6389354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6391998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6394608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6397704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6400425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6403057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6405687Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6408300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6410925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6413563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6416202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6418812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6421553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6424285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6426906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6429534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6432236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6434946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6437598Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6440230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6442847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6445493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6448107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6450724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6453360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6456065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6458723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6461340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6463972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6466653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6469327Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6471967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6474566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 25%] 2024-08-07T18:08:31.6477256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6479877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6482557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6485188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6487831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6490494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6493136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6496106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6498786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6501553Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6504261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6506882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6509480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6512087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6514692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6517303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6519978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6522598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6525202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6527891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6530593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6533222Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6535824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6538510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6541188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6543826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6546465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6549098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6551738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6554377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6556992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6559649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6562367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6565076Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6567723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6570348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6573003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6575679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6578294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6580923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6583589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6586218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6588815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6591438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6594105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6597049Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6599744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6602378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6604996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6607677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6610348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6612970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6615608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6618235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6620915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6623527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6626173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6628812Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6631473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6634140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6636762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6639356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6641994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6644666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6647296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6649902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6652506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6655126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6657768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6660378Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6662987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6665595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6668280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6671021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6673627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6676267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6679005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6681707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6684314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6686922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6689587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6692207Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6694885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6697834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6700462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6703116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6705798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6708430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6711060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6713745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6716444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6719034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6721707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6724331Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6726927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6729542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6732196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6734802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6737453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6740136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6742784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6745406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6748061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6750757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6753393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6756013Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6758631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6761238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6763852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6766455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6769055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6771709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6774386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6777028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6779622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6782279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6784951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6787578Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6790176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6792797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6795705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6798341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6800929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6803551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6806198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6808921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6811623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6814236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6816848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6819527Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6822242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6824881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6827505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6830111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6832734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6835343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6837974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6840575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6843236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6845915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6848513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6851116Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6853762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6856434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6859064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6861703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6864296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6866916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6869553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6872203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6874814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6877475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6880189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6882816Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6885429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6888096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6890768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6893364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6896221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6898884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6901523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6904136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6906771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6909387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6912067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6914763Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6917392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6920089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6922855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6925617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6928282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6930946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6933665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6936380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6939052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6941746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6944449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 
2024-08-07T18:08:31.6947163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6949893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6952597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6955295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6958005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6960716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6963358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6966039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6968764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6971450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6974140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6976827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 26%] 2024-08-07T18:08:31.6979513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6982196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6983695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6985134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6986572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6988047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6989553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6990976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6992415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6993836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6995562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6997029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6998468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.6999888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7001386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7002901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7004318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7005750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7007252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7008776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7010187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7011627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7013057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7014509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7015930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7017396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7018823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7020335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7021870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7023294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7024791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7026279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7027786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7029202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7030646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7032077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7033510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7034926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7036371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7037824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7039290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7040747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7042172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7043609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7045063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7046542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7047988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7049431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7050835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7052266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7053699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7055158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7056580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7058042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7059522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7061028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7062440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7063913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7065414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7066851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7068304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7069738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7071184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7072606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7074039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7075458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7076902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7078382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7079870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7081285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7082774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%] 2024-08-07T18:08:31.7084254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%]
2024-08-07T18:08:31.7085672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%]
[... the bulk of the sweep, 2024-08-07T18:08:31.7087119Z through 2024-08-07T18:08:31.7360413Z, all SKIPPED [0.0002s]-[0.0003s] with the same reason "(Does not support SDPA or pre-SM80 hardware)" at [ 26%]: seq_len_k 1024 walked through head_dim 203, 21, 256, 32, 64, 72, 8 and 96, each crossed with is_causal {False, True}, dropout_p {0.0, 0.22, 0.48}, dtype {float16, bfloat16} and scale {scale0, scale_l1} ...]
2024-08-07T18:08:31.7361896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%]
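The variant names above encode the parameter sweep directly: batch size, query/key sequence lengths, head dimension, causality, dropout probability, dtype, and scale. As a rough illustration of why a single percentage point of progress covers so many log lines, the grid can be multiplied out as below; the parameter lists are transcribed from the test names in this log, and this is only a sketch, not PyTorch's own parametrization code (test_transformers.py generates the variants with its parametrize decorators).

from itertools import product

# Parameter values transcribed from the skipped test names in this log;
# the product is illustrative, not an exact tally of the suite.
seq_len_ks = [1024, 128]
head_dims = [8, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256]
causal_flags = [False, True]
dropout_ps = [0.0, 0.22, 0.48]
dtypes = ["float16", "bfloat16"]
scales = ["scale0", "scale_l1"]

grid = list(product(seq_len_ks, head_dims, causal_flags, dropout_ps, dtypes, scales))
# 2 * 11 * 2 * 3 * 2 * 2 = 528 variants for a single (batch_size, seq_len_q) pair
print(len(grid))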
2024-08-07T18:08:31.7363368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 26%]
[... remaining variants, 2024-08-07T18:08:31.7364799Z through 2024-08-07T18:08:31.7411363Z, all SKIPPED with the same reason: the tail of the seq_len_k 1024 / head_dim 96 block, the full seq_len_k 128 / head_dim 128 block, and the first seq_len_k 128 / head_dim 160 cases; the progress counter ticks over from [ 26%] to [ 27%] at 2024-08-07T18:08:31.7364799Z ...]
2024-08-07T18:08:31.7412774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%]
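None of these entries is a failure: every skip traces to a single hardware guard, since flash-attention SDPA kernels require an NVIDIA GPU of compute capability 8.0 (SM80, i.e. Ampere) or newer, and on an older device each parameterized test short-circuits in a fraction of a millisecond. A minimal sketch of such a guard follows, using only the public torch.cuda API; the helper supports_sm80 and the test class are illustrative stand-ins, not PyTorch's actual implementation (test_transformers.py keys its skips off constants such as PLATFORM_SUPPORTS_FLASH_ATTENTION from torch.testing._internal.common_cuda).

import unittest

import torch

# Illustrative reconstruction of the skip condition behind the message
# "(Does not support SDPA or pre-SM80 hardware)"; SM80 means compute
# capability (8, 0), the Ampere generation.
def supports_sm80() -> bool:
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability()
    return major >= 8

@unittest.skipIf(not supports_sm80(), "Does not support SDPA or pre-SM80 hardware")
class FlashAttentionGradsSketch(unittest.TestCase):
    # Stand-in for one variant of the parameterized sweep above; the real
    # tests compare flash-attention gradients against a math reference.
    def test_flash_attention_vs_math_ref_grads(self):
        q = torch.rand(1, 1, 512, 64, device="cuda", dtype=torch.float16, requires_grad=True)
        out = torch.nn.functional.scaled_dot_product_attention(q, q, q, is_causal=True)
        out.sum().backward()
        self.assertIsNotNone(q.grad)

The class-level skipIf also explains the uniform per-test timings: the condition is evaluated once at decoration time, so each variant is reported as SKIPPED in about 0.0002 s without ever touching the GPU, which matches the timings throughout this section.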
or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7415643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7417073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7418600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7420111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7421557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7422956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7424422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7425888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7427321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7428756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7430189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7431600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7433036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7434441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7435881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7437340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7438842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7440264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7441669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7443146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7444619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7446036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7447446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7448910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7450322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7451741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7453147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7454575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7456038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7457498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7458920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7460332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7461809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7463250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7464663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7466077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 27%] 2024-08-07T18:08:31.7467515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7468926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7470342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7471761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7473202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7474644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7476118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7477545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7479001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7480448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7481914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7483346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 27%] 2024-08-07T18:08:31.7484768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7486201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7487616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7489061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7490486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7491907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7493350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7494834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7496525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7497972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7499450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7500931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 
hardware) [ 27%] 2024-08-07T18:08:31.7502362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7503762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7505190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7506602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7508050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7509458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7510886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7512358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7513865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7515305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7516735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7518218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7519733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7521176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7522595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7524030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7525448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7526862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7528289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7529728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7531189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7532656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7534064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7535498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7536957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7538452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7539853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7541286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7542705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7544103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7545522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7546937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7548401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7549843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7551309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7552728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7554162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7555605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7557079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7558502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7559936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7561333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7562730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7564159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7565575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7566998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7568423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7569948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 27%] 2024-08-07T18:08:31.7571415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7572828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7574228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7575711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7577187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7578631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7580049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7581489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7582917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7584331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7585765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7587183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 27%] 2024-08-07T18:08:31.7588694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7590152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7591578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7592986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7594475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7596211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7597652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7599088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7600532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7601933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7603364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7604781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 27%] 2024-08-07T18:08:31.7606200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7607708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7609180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7610605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7612020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7613503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7614974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7616407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7617837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7619262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7620711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7622153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 27%] 2024-08-07T18:08:31.7623571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7624971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7626442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7627910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7629334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7630725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7632185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7633638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7635066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7636464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7637900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7639305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
27%] 2024-08-07T18:08:31.7640733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7642133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7643548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7644997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7646461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7647901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7649307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7650781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7652248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7653669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7655081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7656517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 
2024-08-07T18:08:31.7657956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7659379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7660796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7662222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7663672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7665126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7666546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7667980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7669458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7670901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7672321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7673732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 
2024-08-07T18:08:31.7675198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7676586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7678029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7679448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7680878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7682319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7683788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7685208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7686631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7688116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7689577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.7691008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 
2024-08-07T18:08:31.7692438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* (consecutive parameterizations with batch_size=1, seq_len_q=512, sweeping seq_len_k in {128, 2048}; head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}; is_causal in {False, True}; dropout_p in {0.0, 0.22, 0.48}; float16 and bfloat16; scale0 and scale_l1) SKIPPED [0.0002s-0.0004s each] (Does not support SDPA or pre-SM80 hardware) [ 27%]
2024-08-07T18:08:31.8020718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 27%] 2024-08-07T18:08:31.8022160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8023655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8025135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8026545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8027983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8029464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8030933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8032346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8033793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8035237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8036664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8038097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8039545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8040969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8042428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8043915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8045327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8046756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8048215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8049690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8051108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8052553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8053960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8055383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8056806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8058273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8059678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8061135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8062623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8064050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8065480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8066937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8068453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8069888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8071326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8072747Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8074192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8075626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8077060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8078499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8079976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8081445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8082904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8084329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8085796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8087297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8088733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8090170Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8091591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8093028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8094438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8096156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8097589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8099142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8100620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8102044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8103467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8104955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8106448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8107864Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8109329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8110761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8112192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8113607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8115044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8116452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8117912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8119382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8120854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8122279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8123751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8125210Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8126629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8128070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8129477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8130904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8132325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8133772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8135175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8136645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8138133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8139579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8140984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8142460Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8143925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8145348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8146778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8148204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8149630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8151045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8152464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8153858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8155326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8156790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8158226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8159633Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8161117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8162581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8164002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8165415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8166840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8168301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8169760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8171200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8172620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8174112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8175570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8177007Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8178454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8179939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8181401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8182835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8184257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8185679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8187091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8188530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8189977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8191394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8192857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8194319Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8196056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8197495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8199024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8200506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8201956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8203372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8204801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8206212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8207636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8209114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8210524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8212012Z 
2024-08-07T18:08:31.8212012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%]
[... a long run of near-identical SKIPPED entries omitted: the seq_len_q_512/seq_len_k_256 leg repeats the same grid of is_causal, dropout_p, dtype and scale through head_dim 128, 160, 16 and 192 (string-sorted); every variant is SKIPPED in 0.0002-0.0003s with the same reason at [ 28%] ...]
2024-08-07T18:08:31.8315598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8318485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8319938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8321372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8322789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8324223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8325689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8327161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8328586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8330020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8331491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8332952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8334377Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8335801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8337257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8338666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8340098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8341524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8342971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8344424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8345902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8347338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8348785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8350237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8351725Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8353145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8354572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8356009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8357454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8358891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8360308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8361732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8363182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8364674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8366102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8367544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8368996Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8370486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8371902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8373308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8374736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8376153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8377622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8379031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8380457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8381918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8383425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8384829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8386251Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8387738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8389230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8390625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8392049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8393461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8394874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8396790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8398229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8399662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8401165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8402655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8404061Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8405487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8406955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8408458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8409855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8411293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8412724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8414213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8415670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8417104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8418632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8420040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8421517Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8422986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8424428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8425839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8427368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8428846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8430283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8431690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8433112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8434535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8435965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8437389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8438810Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8440294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8441768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8443204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8444621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8446102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8447585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8449010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8450424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8451865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8453285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8454714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8456128Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8457552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8459052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8460512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8461937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8463347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8464827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8466263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8467695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8469109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8470540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8471937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8473357Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8474772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8476200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8477666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8479121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8480544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8481961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8483430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8484879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8486314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8487763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8489187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8490590Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8492026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8493450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8494866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8496830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8498341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8499776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8501168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8502641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8504113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8505551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8506954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8508403Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8509810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8511241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8512641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8514058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8515562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8517053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8518476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8519881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8521360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8522839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8524259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8525672Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8527108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8528548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8529970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8531383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8532802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8534256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8535719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8537185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8538620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8540097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8541542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8542957Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8544367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8545814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8547214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8548633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8550041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8551470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8552900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8554365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8555784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8557222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8558669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 28%] 2024-08-07T18:08:31.8560120Z 
2024-08-07T18:08:31.856Z-18:08:31.893Z  [252 consecutive pytest results, condensed; the extraction had fused several log records per line and split records across line breaks]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_*  SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware)  [ 28%] -> [ 29%]
Every parametrization in this stretch is skipped with the same reason. Fixed parameters: batch_size=1, seq_len_q=512. Swept parameters: seq_len_k=256 with head_dim in {8, 96}, then seq_len_k=4 with head_dim in {128, 160, 16, 192, 203, 21, 256, 32, 64}; each (seq_len_k, head_dim) pair is crossed with is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, and scale in {scale0, scale_l1}, i.e. 24 combinations per pair. The excerpt enters part-way through the (seq_len_k=256, head_dim=8) block (at is_causal_False_dropout_p_0_22_float16_scale_l1) and breaks off part-way through the (seq_len_k=4, head_dim=64) block (at is_causal_True_dropout_p_0_22_float16_scale0).
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8933118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8934545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8935961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8937361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8938780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8940236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8941697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8943093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8944524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8946006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8947470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8948864Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8950281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8951691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8953100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8954502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8955921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8957336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8958757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8960232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8961638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8963056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8964494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8965962Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8967357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8968776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8970161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8971564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8972969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8974398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8975811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8977250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8978710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8980120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8981520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8982958Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8984424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8985845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8987257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8988660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8990071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8991470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8992873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8994258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8996134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8997656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.8999051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9000448Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9001906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9003395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9004780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9006204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9007621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9009049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9010438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9011859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9013270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9014729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9016258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9017674Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9019095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9020554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9022017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9023417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9024830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9026257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9027662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9029052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9030477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9031887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9033294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9034800Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9036277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9037694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9039084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9040533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9042004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9043448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9044854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9046305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9047727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9049176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9050590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9052022Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9053493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9054980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9056402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9057821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9059300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9060763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9062182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9063588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9065026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9066450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9067871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9069280Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9070710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9072175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9073668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9075089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9076511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9077995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9079453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9080877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9082302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9083746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9085175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9086610Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9088038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9089472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9090927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9092405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9093812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9095647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9097168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9098663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9100102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9101531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9102954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9104363Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9105826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9107257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9108675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9110140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9111647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9113052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9114473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9115995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9117464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9118902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9120309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9121734Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9123142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9124579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9126012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9127432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9128885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9130364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9131755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9133173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9134623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9136117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9137517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9138928Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9140358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9141787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9143209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9144613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9146056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9147518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9148974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9150386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9151823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9153315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9154801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9156233Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9157683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9159115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9160524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9161957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9163376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9164804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9166245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9167718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9169138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9170574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9172035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9173502Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9174922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9176370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9177774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9179180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9180606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9182033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9183455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9184930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9186421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9187850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9189273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9190731Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9192223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9193646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9195436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9199577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9202861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9204290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9205690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9207149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9208587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9210051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9211465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9212879Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9214374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9215954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9217373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9218776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9220228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9221757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9223246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9224659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9226100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9227524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9228953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9230393Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9231832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9233302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9234782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9236199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9237634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9239052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9240533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9242035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9243448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9244875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9246278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9247692Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9249103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9250557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9252000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9253472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9254894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9256383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9257790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9259246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9260751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9262181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9263604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9265020Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9266459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9267928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9269350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9270851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9272322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9273766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9275168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9276597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9278071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9279552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9280989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9282411Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9283855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9285280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9286709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9288123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9289621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9291091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9292512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9293927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9295814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9297373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9298876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9300307Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9301736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9303183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9304594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9306035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9307445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9308940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9310419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9311840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9313254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9314692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9316201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9317686Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9319102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9320549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9321973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9323382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9324826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9326254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9327720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9329225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9330688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9332117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9333543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9335001Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9336492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9337917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9339348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9340788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9342202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9343639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9345039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 29%] 2024-08-07T18:08:31.9346453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9347908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9349398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9350827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9352256Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9353719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9355213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9356612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9358043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9359466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9360907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9362329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9363739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9365184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9366657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9368137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9369556Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9371022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9372496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9373977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9375387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9376822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9378240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9379655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9381085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9382500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9383939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9385384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9386880Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9388299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9389737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9391206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9392683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9394093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9395987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9397423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9398841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9400249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9401706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9403105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9404595Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9406117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9407543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9408969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9410440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9411943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9413354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9414769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9416207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9417643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9419060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9420484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9421905Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9423363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9424849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9426249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9427671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9429141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9430635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9432043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9433473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9434896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9436343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9437747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9439185Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9440617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9442107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9443565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9444980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9446404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9447869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9449332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9450748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9452184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9453603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9455026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9456433Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9457868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9459289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9460779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9462242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9463665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9465108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9466512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9467988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9469463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9470932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9472350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9473778Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9475196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9476648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9478058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9479531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9481011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9482449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9483844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9485252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9486730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9488205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9489628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9491066Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9492499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9493919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9495746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9497187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9498710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9500209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9501658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9503068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9504493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9506006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9507481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9508921Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9510351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9511825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9513240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9514684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9516142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9517627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9519082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9520506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9521951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9523395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9524865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9526350Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9527784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9529213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9530644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9532073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9533513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9534935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9536396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9537847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9539287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9540710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9542148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9543601Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9545085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9546505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9547919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9549350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9550756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9552218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9553612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9555023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9556475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9557955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9559353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9560772Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9562239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9563722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9565116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9566521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9567962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9569389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9570818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9572243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9573686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9575160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9576643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9578066Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9579507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9580998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9582474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9583887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9585327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9586752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9588155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9589574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9591003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9592449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9593895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9595821Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9597275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9598713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9600215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9601724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9603148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9604657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9606062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9607525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9608965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9610393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9611837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9613311Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9614815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9616285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9617726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9619190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9620669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9622109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9623530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9624935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9626377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9627797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9629206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9630637Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9632170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9633665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9635076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9636503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9637959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9639437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9640839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9642289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9643707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9645149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9646553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9647965Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9649397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9650863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9652363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9653782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9655208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9656666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9658129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9659523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9660951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9662394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9663813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9665214Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9666646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9668064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9669501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9670987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9672438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9673887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9675296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9676770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9678238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9679688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9681103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9682559Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9683982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9685431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9686842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9688303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9689784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9691203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9692637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9694047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9696006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9697534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9698957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9700373Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9701810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9703243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9704669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9706079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9707566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9709040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9710439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9711860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9713295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9714778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9716275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9717712Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9719133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9720573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9721977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9723419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9724827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9726299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9727738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9729138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9730563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9731975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9733437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9734886Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9736313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9737734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9739150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9740552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9742001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9743422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9744837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9746296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9747794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9749218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9750649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9752129Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9753591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0004s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9755026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9756432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9757856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9759259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9760682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9762076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9763495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9764942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9766425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9767824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9769222Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9770698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9772188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9773671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9775076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9776509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9777925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9779348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9780747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9782199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9783664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9785141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9786555Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9787992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9789469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9790916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9792358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9793786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9795703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9797134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9798550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9799957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9801383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9802885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9804375Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9805783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9807214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9808700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9810177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9811600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9813036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9814447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9815890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9817330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9818753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9820172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9821627Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9823166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9824580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9825995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9827394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9828876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9830328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9831713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9833149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9834557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9835981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9837374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9838788Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9840228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9841706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9843117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9844526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9845937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9848010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9849491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9850893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9852331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9853773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9855196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9856607Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9858049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9859511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9860975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9862378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9863823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9865232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9866691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9868136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9869562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9870979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9872375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9873814Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9875227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9876657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9878097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9879566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9880976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9882414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9883836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9885319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9886787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9888223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9889627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9891038Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9892470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9893919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9895771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9897220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9898735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9900219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9901641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9903040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9904554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9906028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9907446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9908852Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9910288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9911704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9913104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9914550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9916015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9917505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9918962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9920391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9921803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9923288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9924798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9926229Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9927648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9929090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9930487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9931892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9933321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9934746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9936204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9937650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9939087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9940498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9941965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9943410Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9944852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9946268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9947687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9949091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9950513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9951930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9953330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9954787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9956259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9957702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9959099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9960564Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9962023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9963469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9964869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9966299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9967703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9969130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9970523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9971919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9973402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9974866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9976277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9977672Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9979096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9980545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9982005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9983426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9984868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9986290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9987710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9989118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9990538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9992017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9993484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9994912Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9996750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9998191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:31.9999678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:32.0001184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 30%] 2024-08-07T18:08:32.0002593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0004041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0005435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0006853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0008271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0009704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0011157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0012627Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0014070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0015488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0016953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0018402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0019883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0021304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0022732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0024162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0025599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0027026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0028454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0029921Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0031388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0032829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0034262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0035700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0037160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0038634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0040030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0041454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0042863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0044319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0045726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0047143Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0048557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0050029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0051479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0052888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0054336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0055794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0057256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0058658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0060092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0061510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0062928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0064357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0065782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0067199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0068662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0070115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0071518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0072942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0074399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0075854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0077260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0078692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0080072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0081488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0082889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0084345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0085738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0087194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0088662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0090100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0091500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0092952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0094470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0096321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0097768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0099173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0100604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0102020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0103438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0104869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0106377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0107856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0109268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0110662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0112069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0113551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0115030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0116486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0117906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0119341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0120738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0122160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0123567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0125064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0126513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0127932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0129344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0130781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0132236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0133689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0135136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0136555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0137972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0139372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0140796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0142207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0143661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0145117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0146540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0147955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0149372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0150812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0152266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0153695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0155112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0156528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0157930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0159364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0160757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0162169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0163626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0165127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0166523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0167952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0169424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0170891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0172306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0173715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0175178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0176582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0177989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0179380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0180795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0182247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0183708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0185122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0186548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0187996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0189450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0190845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0192256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0193686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0195440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0196918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0198334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0199770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0201242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0202732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0204138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0205584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0207036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0208518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0209909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0211310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0212714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0214103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0215534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0216991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0218404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0219843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0221317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0222724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0224141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0225551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0227051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0228504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0229915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0231318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0232729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0234164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0235578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0236997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0238446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0239947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0241344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0242761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0244162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0245634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0247072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0248482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0249888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0251295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0252704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0254102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0255533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0256979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0258435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0259833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0261262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0262674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0264128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0265591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0267022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0268443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0269849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0271273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0272687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0274145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0275548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0277012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0278465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0279892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0281282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0282738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0284201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0285633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0287031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0288449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0289851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0291264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0292677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0294090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0296026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0297571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0299004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0300417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0301955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0303449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0304895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0306321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0307768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0309205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0310622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0312058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0313477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0314997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0316498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0317940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0319364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0320859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0322315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0323757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0325202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0326645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0328051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0329486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0330912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0332343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0333816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0335308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0336763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0338198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0339674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0341142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0342583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0344016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0345472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0346891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0348324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0349751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0351177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0352626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0354097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0355569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0356981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0358455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0359924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0361369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0362776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0364205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0365644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0367084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0368496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0369929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0371407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0372891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0374313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0375747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0377228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0378712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0380137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0381551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0382979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0384395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0385829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0387237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0388674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0390138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0391607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0393018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0394434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0396443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0397943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0399366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0400797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0402254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0403669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0405117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0406554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0408009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0409476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%]
2024-08-07T18:08:32.0410976Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0412413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0413868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0415339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0416856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0418292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0419720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0421148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0422564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0424009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0425455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0426883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0428342Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0429830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0431254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0432685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0434102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0435622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0437133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0438544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0439980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0441407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0442858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0444275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0445787Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0447253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0448751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0450163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0451598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0453016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0454507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0455984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0457413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0458859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0460285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0461711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0463128Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0464569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0466061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0467546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0468958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0470400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0471823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0473286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0474741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0476204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0477636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0479044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0480473Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0481894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0483339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0484787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0486287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0487703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0489135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0490531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0491987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0493449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0494886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0496708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0498122Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0499556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0500980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0502399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0503887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0505393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0506839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0508271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0509684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0511189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0512684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0514118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0515546Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0517040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0518481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0519905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0521346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0522808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0524356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0525774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0527207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0528628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0530116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0531575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0533006Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0534432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0535899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0537308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0538724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0540349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0541829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0543300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0544714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0546176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0547603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0549074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0550534Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0551974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0553403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0554834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0556267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0557699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0559114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0560546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0562011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0563425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0564861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0566286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0567755Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0569219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0570658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0572055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0573478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0574896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0576364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0577768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0579186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0580646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0582121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0583543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0584951Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0586457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0587927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0589354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0590769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0592198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0593607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0595316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0596867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0598306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0599815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0601298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0602763Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0604179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0605668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0607150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0608570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0609989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0611424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0612823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0614250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0615664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0617173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0618623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0620110Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0621530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0622958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0624425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0625886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0627324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0628743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0630153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0631552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0632985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0634400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0635816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0637272Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0638751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0640170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0641572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0643040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0644502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0645930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0647329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0648747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0650161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0651595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0652998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0654427Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0655905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0657398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 31%] 2024-08-07T18:08:32.0658800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0660227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0661679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0663155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0664567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0666022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0667451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0668866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0670287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0671693Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0673121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0674574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0676046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0677451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0678891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0680310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0681752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0683226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0684648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0686114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0687528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0688960Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0690385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0691832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0693275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0694758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0696642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0698097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0699500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0700997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0702477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0703900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0705329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0706761Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0708201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0709623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0711060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0713819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0716574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0719274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0721964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0724617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0727344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0730077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0733194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0736533Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0739199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0741896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0744561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0747210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0749935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0753216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0756493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0759135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0761837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0764616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0767333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0769954Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0773292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0776751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0779488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0782162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0784860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0787606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0790318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0793493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0797411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0800114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0802919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0805666Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0808335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0811232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0814323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0817413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0820082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0822777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0825500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0828187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0830834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0833526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0836175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0838866Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0841613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0844255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0846905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0849547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0852195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0854859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0857507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0860183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0862884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0865541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0868210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0870854Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0873555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0876267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0878901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0881546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0884188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0886845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0889479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0892096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0894753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0898591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0901377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0904018Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0906662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0909378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0912069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0914706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0917424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0920102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0922779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0925430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0928068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0930756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0933516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0936231Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0938907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0941607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0944303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0947018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0949691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0952377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0955029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0957664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0960315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0962966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0965621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0968340Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0971068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0973739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0976416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0979095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0981798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0984475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0987135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0989816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0992482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0995575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.0998328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1001051Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1003826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1006573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1009261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1011896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1014610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1017381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1020029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1022678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1025320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1028010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1030672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1033326Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1035986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1038706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1041410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1044057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1046712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1049436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1052122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1054748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1057419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1060088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1062719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1065385Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1068048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1070716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1073419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1076137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1078766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1081412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1084086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1086753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1089436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1092087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1094708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1097782Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1100425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1103107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1105757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1108473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1111213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1113868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1116600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1119260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1122002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1124746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1127404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1130058Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%]
[2024-08-07T18:08:32.11Z through 18:08:32.17Z: every other test_flash_attention_vs_math_ref_grads variant in this span is likewise SKIPPED in 0.0002-0.0003s with the identical reason "(Does not support SDPA or pre-SM80 hardware)", suite progress holding at [ 32%]. Within the span the parametrization sweeps batch_size=1, seq_len_q=64, seq_len_k in {128, 2048}, head_dim in {8, 32, 64, 72, 96, 256} for seq_len_k=128 and {16, 128, 160, 192, 203} for seq_len_k=2048, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, and scale in {scale0, scale_l1}; the skipped listing continues past this span.]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1786601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1789339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1792000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1794671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1797896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1800675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1803364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1806020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1808686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1811345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1814068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1817167Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1819893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1822693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1825430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1828085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1830741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1833473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1836183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1838865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1841535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1844190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1846855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 32%] 2024-08-07T18:08:32.1849530Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1852201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1854882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1857613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1860314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1862971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1865681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1868323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1871003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1873706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1876413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1879046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1881682Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1884333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1887006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1889651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1892325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1895362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1898176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1901038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1903797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1906597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1909374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1912170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1915460Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1918187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1920886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1923576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1927066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1929874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1932611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1935649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1938315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1941939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1944722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1947462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1950102Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1953416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1957315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1960039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1962713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1965920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1969154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1972981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1976227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1979253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1982382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1985636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1988701Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1991347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1994030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1997183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.1999850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2002510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2005166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2007880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2010588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2013238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2015939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2018641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2021352Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2024053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2026712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2029370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2032016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2034693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2037374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2040015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2042782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2045553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2048226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2050890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2053566Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2056264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2058989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2061665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2064326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2066986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2069672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2072353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2074972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2077666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2080384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2083035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2085686Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2087099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2088622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2090082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2091504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2092926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2094370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2096194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2097635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2099101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2100603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2102099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2103512Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2104957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2106382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2107873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2109371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2110802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2112230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2113647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2115051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2116518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2117960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2119388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2120863Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2122333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2123772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2125177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2126649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2128105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2129563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2130965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2132390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2133805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2135248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2136648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2138057Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2139553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2141028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2142456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2143870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2145344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2146800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2148215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2149637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2151066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2152478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2153896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2155304Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2156712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2158190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2159678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2161103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2162525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2164015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2165471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2166899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2168321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2169780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2171189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2172616Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2174042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2175490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2176943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2178417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2179853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2181271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2182732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2184180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2185622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2187061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2188493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2189908Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2191533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2192987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2194409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2196367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2197937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2199406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2201133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2202645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2204127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2205590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2206995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2208420Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2209859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2211302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2212711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2214136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2215588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2217125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2218523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2219964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2221399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2222861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2224322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2225730Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2227167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2228583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2230024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2231430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2232862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2234368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2235867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2237270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2238709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2240159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2241605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2243080Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2244496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2245943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2247345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2248770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2250202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2251631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2253057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2254526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2255936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2257367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2258766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2260246Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2261700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2263113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2264531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2265942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2267362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2268770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2270205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2271643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2273129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2274540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2275952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2277357Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2278830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2280312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2281714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2283140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2284584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2286009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2287400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2288812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2290274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2292273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2294289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2296900Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2299115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2301526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2303585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2305412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2307517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2309682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2311717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2313902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2316023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2318140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2320324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2322648Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2324441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2326484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2328591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2330792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2332257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2333683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2335077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2336498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2337915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2339355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2340753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2342249Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2343725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2345167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2346566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2348032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2349496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2350932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2352347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2353775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2355190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2356615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2358036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2359445Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2360923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2362422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2363847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2365256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2366728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2368223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2369632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2371037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2372490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2373905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2375325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2376731Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2378145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2379614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2381063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2382511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2383923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2385412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2386855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2388273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2389685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2391120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2392540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2393957Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2395802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2397257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2398757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2400237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2401655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2403087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2404558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2406017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2407435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2408842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2410254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2411650Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2413101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2414515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2415926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2417409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2418885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2420325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2421728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2423924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2425441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2426879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2428283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2429715Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2431137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2432578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2434022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2435457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2436910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2438376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2439791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2441190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2442623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2444102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2445564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2446970Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2448406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2449824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2451241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2452652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2454102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2455550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2457028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2458430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2459844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2461277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2462715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2464201Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2465612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2467053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2468454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2469875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2471281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2472708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2474150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2475610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2477012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2478426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2479836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2481276Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2482748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2484170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2485585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2486980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2488401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2489824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2491237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2492670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2494154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2496023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2497474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2498876Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2500380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2501875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2503304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2504733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2506141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2507571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2508963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2510381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2511785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2513277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2514739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2516156Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2517599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2519064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2520522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2521921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2523368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2524788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2526206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2527647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2529077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2530495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2531968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2533437Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2534927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2536346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2537815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2539261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2540665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2542095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2543508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2544930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2546337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2547768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2549166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2550624Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2552085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2553535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2554934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2556401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2557855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2559266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2560676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2562075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2563522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2564936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2566354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2567758Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2569224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2570687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2572103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2573530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2574998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2576446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2577852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2579245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2580648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2582077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2583498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2584918Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2586316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2587780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2589227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2590641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2592049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2593569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2595005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2597225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2598642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2600065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2601484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2602888Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2604352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2605773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2607267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2608739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2610169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2611573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2612987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2614462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2615941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2617392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2618824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2620226Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2621635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2623070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2624485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2625944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2627401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2628832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2630223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2631642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 33%] 2024-08-07T18:08:32.2633094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2634598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2635997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2637421Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2638830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2640242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2641657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2643057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2644541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2645987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2647394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2648786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2650207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2651663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2653118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2654536Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2655965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2657372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2658768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2660192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2661603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2663035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2664492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2665962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2667380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2668809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2670319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2671789Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2673192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2674654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2676053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2677475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2678878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2680292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2681702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2683134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2684630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2686040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2687450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2688891Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2690361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2691771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2693181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2694597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2696507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2697943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2699339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2700752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2702235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2703738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2705154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2706571Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2707978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2709458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2710906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2712324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2713723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2715164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2716592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2717995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2719411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2720861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2722317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2723708Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2725147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2726554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2728001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2729435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2730862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2732277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2733693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2735113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2736548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2738019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2739458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2741001Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2742406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2743847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2745260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2746731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2748175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2749600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2750993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2752408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2753807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2755254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2756642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2758027Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2759486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2760945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2762353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2763751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2765243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2766702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2768113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2769513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2770939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2772354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2773774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.2775189Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%]
[condensed skip block, timestamps 2024-08-07T18:08:32: the remainder of this span is the same enumeration. Every parametrization of test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_* is SKIPPED in 0.0002s-0.0003s at [ 34%] with the identical reason "(Does not support SDPA or pre-SM80 hardware)". Parametrizations covered in this span: seq_len_k_4 with head_dim in {8, 21, 32, 64, 72, 96, 203, 256}, and seq_len_k_587 with head_dim in {16, 128, 160}, each crossed with is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {bfloat16, float16}, and scale in {scale0, scale_l1}; the span enters and leaves the grid mid-enumeration.]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3138811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3140241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3141636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3143099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3144557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3146016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3147416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3148842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3150258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3151700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3153117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3154525Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3156022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3157496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3158927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3160338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3161769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3163216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3164691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3166108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3167539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3168957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3170372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3171779Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3173195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3174672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3176136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3177559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3178974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3180410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3181852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3183323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3184741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3186200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3187606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3189034Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3190456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3191899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3193342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3194819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3196593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3198007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3199422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3200898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3202400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3203816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3205250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3206661Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3208092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3209517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3210936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3212391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3213885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3215309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3216754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3218176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3219641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3221128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3222525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3223956Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3225389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3226849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3228248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3229672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3231158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3232633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3234019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3235430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3236853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3238320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3239778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3241175Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3242605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3244017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3245449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3246850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3248282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3249697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3251159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3252616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3254050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3255488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3256940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3258415Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3259832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3261280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3262681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3264111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3265544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3266973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3268364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3269854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3271318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3272758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3274150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3275617Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3277091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3278507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3279926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3281331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3282758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3284167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 34%] 2024-08-07T18:08:32.3285600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3287035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3288518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3289984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3291403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3292808Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3294283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3296022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3297440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3298873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3300279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3301697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3303084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3304498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3305929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3307437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3308912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3310331Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3311737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3313224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3314677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3316095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3317566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3318983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3320398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3321799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3323236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3324660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3326138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3327589Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3329022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3330432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3331929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3333411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3334834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3336254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3337643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3339063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3340463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3341897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3343289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3344751Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3346220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3347650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3349042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3350457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3351901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3353378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3354767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3356183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3357614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3359029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3360453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3361859Z 
2024-08-07T18:08:32.3363335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 35%]
2024-08-07T18:08:32.3724975Z [condensed: roughly 270 consecutive parametrizations of this test, every one SKIPPED with the identical reason above; the line breaks between entries were lost in extraction. Axes covered in this stretch: batch_size 1; seq_len_q 64; seq_len_k 587 with head_dim in {72, 8, 96} and seq_len_k 64 with head_dim in {128, 160, 16, 192, 203, 21, 256, 32}; is_causal in {False, True}; dropout_p in {0.0, 0.22, 0.48}; dtype in {bfloat16, float16}; scale in {scale0, scale_l1}]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3726414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3727819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3729267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3730716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3732133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3733540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3734944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3736384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3737793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3739225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3740616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3742081Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3743538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3744967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3746381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3747845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3749281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3750689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3752069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3753478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3754878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3756303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3757709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3759101Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3760558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3762008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3763416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3764807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3766255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3767700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3769153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3770542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3771970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3773378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3774787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3776212Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3777620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3779095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3780536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3781953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3783351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3784771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3786219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3787668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3789103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3790531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3791918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3793327Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3794732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3796450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3797951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3799411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3800819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3802222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3803624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3805092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3806601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3808004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3809418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3810812Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3812230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3813636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3815026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3816459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3817981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3819449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3820831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3822231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3823673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3825142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3826542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3827952Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3829349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3830760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3832137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3833542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3834948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3836420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3837881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3839277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3840696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3842152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3843610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3845003Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3846451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3847861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3849270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3850669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3852091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3853495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3854921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3856416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3857818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3859240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3860629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3862126Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3863573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3864992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3866399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3867805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3869212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3870646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3872038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3873477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3874972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3876418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3877831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3879231Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3880698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3882158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3883563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3884969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3886410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3887812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3889221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3890612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3892073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3893533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3894918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3896667Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3898072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3899583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3901042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3902456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3903864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3905302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3906706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3908126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3909534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3911021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3912482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3913886Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3915315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3916781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3918249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3919702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3921117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3922522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3923981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3925373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3926808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3928214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3929619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3931073Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3932542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3933951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3935352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 35%] 2024-08-07T18:08:32.3936845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3938293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3939717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3941114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3942524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3943922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3945344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3946735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3948146Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3949609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3951093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3952487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3953897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3955386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3956833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3958226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3959608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3961017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3962419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3963819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3965206Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3966631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3968072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3969518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3970901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3972323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3973733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3975163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3976629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3978032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3979468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3980859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3982280Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3983696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3985152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3986587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3988060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3989454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3990871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3992255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3993692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3995437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3996909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3998330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.3999724Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4001143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4002551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4003962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4005467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4006962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4008366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4009777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4011169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4012651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4014145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4015546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4017011Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4018426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4019852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4021256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4022671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4024058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4025535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4026973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4028376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4029774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4031242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4032678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4034069Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4035501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4036913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4038315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4039704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4041123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4042526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4043966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4045415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4046842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4048245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4049695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4051128Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4052546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4053955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4055358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4056775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4058172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4059591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4060968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4062409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4063860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4065285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4066668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4068125Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4069558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4070974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4072357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4073750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4075189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4076599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4078018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4079423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4080892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4082355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4083774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4085189Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4086618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4088083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4089544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4090948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4092348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4093772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4095484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4096922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4098327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4099848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4101307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4102721Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4104123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4105571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4107029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4108502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4109906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4111333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4112718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4114107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4115554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4117009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4118467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4119909Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4121329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4122740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4124154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4125616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4127074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4128470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4129874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4131259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4132659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4134078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4135474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4136881Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4138316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4139791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4141181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4142591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4144087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4145577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4146970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4148382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4149786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4151196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4152605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4154014Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4155458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4156909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4158446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4159846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4161256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4162700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4164148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4165551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4166976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4168378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4169782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4171165Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4172572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4173993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4175448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4176908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4178313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4179738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4181126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4182575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4184030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4185474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4186875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4188293Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4189699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4191116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4192532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4193973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4195704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4197123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4198525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4199909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4201401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4202882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4204287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4205700Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4207115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4208514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4209902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4211316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4212708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4214179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4215644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4217094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4218498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4219961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4221388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4222794Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4224198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4225640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4227024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4228417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4229823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4231223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4232676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4234105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4235536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4236936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4238374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%] 2024-08-07T18:08:32.4239800Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 36%]
[... roughly 250 further parametrized variants of test_flash_attention_vs_math_ref_grads SKIPPED with the same reason, each in 0.0002-0.0003s: batch_size=1; (seq_len_q=64, seq_len_k=8) with head_dim in {8, 96}, then (seq_len_q=8, seq_len_k=1024) with head_dim in {16, 21, 32, 64, 72, 128, 160, 192, 203, 256}; is_causal in {False, True}; dropout_p in {0.0, 0.22, 0.48}; dtype in {float16, bfloat16}; scale in {scale0, scale_l1}; progress marker advances from [ 36%] to [ 37%]; the head_dim=72 block continues beyond this excerpt ...]
2024-08-07T18:08:32.4610923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4613772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4615218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4616648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4618101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4619573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4621038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4622446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4623845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4625296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4626751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4628215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4629619Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4631035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4632474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4633873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4635321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4636787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4638258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4639711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4641135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4642545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4643981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4645500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4646975Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4648385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4649811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4651227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4652632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4654057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4655488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4656939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4658382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4659804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4661212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4662625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4664087Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4665601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4667008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4668404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4669827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4671240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4672681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4674082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4675601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4677074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4678513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4679918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4681341Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4682798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4684283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4685710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4687141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4688546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4689951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4691366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4692764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4694189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4696062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4697580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4698986Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4700418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4701890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4703377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4704774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4706232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4707648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4709043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4710464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4711875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4713312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4714766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4716268Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4717728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4719169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4720621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4722099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4723505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4724933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4726342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4727756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4729165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4730583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4731995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4733437Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4734930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4736362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4737777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4739233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4740718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4742133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4743554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4744957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4746411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4747831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4749241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4750668Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4752126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4753618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4755023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4756467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4757918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4759463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4760852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4762267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4763679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4765113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4766524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4767947Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4769358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4770814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4772283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4773689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4775111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4776534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4777983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4779427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4780856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4782273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4783689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4785092Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4786520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4787932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4789365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4790841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4792238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4793657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4795395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4796920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4798409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4799835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4801276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4802703Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4804100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4805542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4806936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4808425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4809906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4811319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4812740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4814146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4815641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4817156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4818587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4820002Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4821443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4822862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4824286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4825702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4827125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4828586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4830037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4831456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4832867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4834349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4835855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.4837274Z 
2024-08-07T18:08:32.4838686Z–2024-08-07T18:08:32.5200706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED (Does not support SDPA or pre-SM80 hardware) [ 37%]
    [252 parametrized cases collapsed; each skipped in 0.0002s–0.0003s with the identical reason above, progress holding at 37%]
    Cases in this stretch all use batch_size=1, seq_len_q=8 and sweep:
        seq_len_k=128  with head_dim in {8, 21, 32, 64, 72, 96, 192, 203, 256}
        seq_len_k=2048 with head_dim in {16, 128, 160}
        is_causal in {False, True}
        dropout_p in {0.0, 0.22, 0.48}
        dtype in {bfloat16, float16}
        scale in {scale0, scale_l1}
    (the sweep is cut mid-parameter-set at both ends of this stretch: head_dim=192 enters partway through and head_dim=16 continues past it)
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5202120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5203567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5204973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5206486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5207976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5209406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5210824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5212223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5213714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5215192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5216602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5218084Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5219526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5220938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5222384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5223786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5225271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5226744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5228188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5229603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5231030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5232545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5234007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5235438Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5236871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5238339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5239746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5241181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5242593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5244028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5245467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5246936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5248374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5249814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 37%] 2024-08-07T18:08:32.5251266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5252725Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5254156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5255581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5257007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5258428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5259873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5261310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5262737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5264191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5265682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5267107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5268540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5270002Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5271471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5272919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5274340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5275779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5277200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5278639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5280042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5281466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5282927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5284422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5285833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5287270Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5288735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5290225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5291628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5293040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5294482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5296216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5297668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5299079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5300521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5302026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5303537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5304948Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5306384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5307889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5309379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5310783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5312215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5313624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5315021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5316436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5317916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5319357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5320808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5322276Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5323690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5325122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5326563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5328055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5329478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5330924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5332329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5333799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5335240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5336677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5338129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5339589Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5341081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5342514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5343948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5345412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5346891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5348329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5349749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5351160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5352596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5354024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5355435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5356862Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5358342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5359877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5361288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5362717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5364135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5365613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5367057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5368503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5369925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5371364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5372771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5374192Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5375688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5377167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5378692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5380106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5381531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5382947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5384407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5385856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5387286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5388735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5390155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5391564Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5393003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5394421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5396160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5397674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5399114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5400557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5401959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5403444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5404926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5406369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5407784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5409231Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5410648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5412095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5413503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5414956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5416428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5417878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5419312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5420713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5422192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5423654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5425069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5426484Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5427913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5429338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5430749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5432149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5433579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5435035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5436491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5437919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5439342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5440827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5442295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5443724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5445154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5446597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5448019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5449452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5450857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5452295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5453731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5455184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5456610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5458033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5459492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5460937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5462367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5463789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5465207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5466604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5468039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5469461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5470873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5472315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5473797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5475214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5476617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5478097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5479553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5480990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5482391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5483814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5485216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5486638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5488042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5489462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5490904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5492386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5493775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5495430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5496918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5498427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5499906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5501308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5502744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5504161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5505574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5506986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5508443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5509918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5511408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5512822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5514237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5515679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5517135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5518673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5520090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5521523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5522972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5524395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5525809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5527241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5528697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5530175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5531584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5533020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5534419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5535861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5537338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5538772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5540192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5541592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5543029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5544447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5545866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5547312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5548814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5550234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5551659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5553070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5554525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5556005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5557398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5558846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5560259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5561686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5563086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5564505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5565947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5567431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5568845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5570261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5571668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5573141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5574577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5575984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5577428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5578871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5580284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5581691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5583123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5584540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5586001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5587469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5588916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5590322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5591773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5593220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5594627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5596309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5597721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5599162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5600567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5601995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5603388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5604868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5606344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5607775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5609188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5610656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5612129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5613553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5614946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5616343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5617811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5619250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5620664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5622062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5623520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5624973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5626382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5627771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5629250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5630701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5632112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5633511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5634918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5636341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5637732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5639168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5640586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5642060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5643509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5644938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5646347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5647790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5649247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5650714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5652127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5653572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5654976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5656379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5657810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5659232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5660746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5662200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5663630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5665044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5666458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5667894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5669377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5670783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5672207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5673609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5675019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5676456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5677863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5679328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5680790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5682226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5683628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5685055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5686514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5688006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5689408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5690838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5692240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5693663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5695294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5696713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5698214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5699698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5701112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5702512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5703940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5705407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5706882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5708292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5709723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5711134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5712547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5713940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5715348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5716781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5718270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5719741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5721151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5722581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5724017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5725547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5726944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5728385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5729774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5731182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5732582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5734006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5735399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5736846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5738330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5741044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5743714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5746378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5749075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5751735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5754415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5757085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5759758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5763418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5766087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5768735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5771468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5774202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5776882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5779552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5782951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5785702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5788357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5790991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5793627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5796675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5799307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5803081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5805811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5808609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5811335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5813963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5816628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5819388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5823097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5825775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5828423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5831080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5833705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%]
2024-08-07T18:08:32.5836379Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5839024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5842123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5845239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5847933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5850556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5853194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5855847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5858524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5861204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5863857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5866479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5869101Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5871761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5874441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5877080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5879748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5882435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5885068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5887706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5890335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5893051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5896082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5898777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5901400Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5904042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5906694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5909337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5911992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5914713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5917437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5920067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5922685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5925332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5928048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5930766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5933383Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5936024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5938667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5941304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5943926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5946559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5949211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5951909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5954592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5957233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5959915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5962605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5965268Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5967932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5970600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5973241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5975871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5978540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5981176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5983784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5986490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5989189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5991836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5994470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.5997460Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6000159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6002809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6005476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6008093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6010731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6013393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6016010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6018738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6021460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6024205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6026841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6029464Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6032129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6034829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6037461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6040104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6042757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6045386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6048000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6050612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 38%] 2024-08-07T18:08:32.6053236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6055920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6058601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6061233Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6063830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6066466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6069125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6071790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6074443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6077101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6079740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6082366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6085015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6087670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6090354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6093041Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6095936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6098660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6101334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6104047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6106754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6109397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6112035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6114635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6117291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6119991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6122639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6125260Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6127985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6130682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6133316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6135995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6138692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6141453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6144084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6146721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6149363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6152019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6154703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6157316Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6159957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6162673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6165358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6167989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6170614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6173290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6175957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6178575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6181215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6183841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6186460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6189091Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6191719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6194357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6197314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6199998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6202625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6205260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6207927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6210629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6213251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6215924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6218617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6221254Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6223883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6226537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6229188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6231857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6234540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6237181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6239777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6242398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6245070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6247788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6250438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6253058Z 
test_transformers.py::TestSDPACudaOnlyCUDA, 2024-08-07T18:08:32.6255663Z to 2024-08-07T18:08:32.6931910Z: 252 consecutive parameterized tests SKIPPED, 0.0002s-0.0003s each (Does not support SDPA or pre-SM80 hardware) [ 39%]
All 252 entries instantiate test_flash_attention_vs_math_ref_grads with batch_size=1 and seq_len_q=8; for each (seq_len_k, head_dim) pair the sweep is the 24-entry cross product of is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {bfloat16, float16}, and scale in {scale0, scale_l1} (test IDs end in _cuda_<dtype>). In log order:
  seq_len_k=4, head_dim=160: tail of the sweep (is_causal=True, from dropout_p=0.22/float16/scale_l1 through the end of dropout_p=0.48): 5 entries
  seq_len_k=4, head_dim in {16, 192, 203, 21, 256, 32, 64, 72, 8, 96}: the full 24-entry sweep for each head_dim: 240 entries
  seq_len_k=587, head_dim=128: start of the sweep (is_causal=False, all of dropout_p=0.0, then dropout_p=0.22 through float16/scale0): 7 entries
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6934557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6937285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6939939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6942580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6945354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6948093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6950710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6953335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6956049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6958752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6961395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6964052Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6966676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6969328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6971970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6974608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6977298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6980001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6982734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6985376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6988027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6990746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6993441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6996467Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.6999162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7001845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7004501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7007145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7009822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7012466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7015200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7017931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7020607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7023259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7025900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7028587Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7031294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7033923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7036590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7039247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7041893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7044549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7047193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7049846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7052565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7055223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7057890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7060529Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7063244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7065935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7068570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7071210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7073844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7076497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7079127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7081741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7084392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7087090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7088494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7089890Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7091326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7092775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7094230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7095951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7097395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7098837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7100234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7101683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7103096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7104605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7106073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7107502Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7108918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7110354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7111829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7113319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7114720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7116134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7117546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7118997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7120424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7121859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7123276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7124720Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7126203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7127616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7129032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7130476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7131973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7133391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7134811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7136220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7137634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7139073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7140474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7141917Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7143389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7144883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7146289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7147721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7149168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7150652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7152056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7153481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7154893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7156305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7157715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7159123Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7160550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7162007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7163469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7164874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7166296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7167753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7169212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7170622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7172054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7173461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7174855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7176274Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7177737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7179170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7180616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7182142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7183544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 39%] 2024-08-07T18:08:32.7184965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7186398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7187851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7189247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7190680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7192079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7193494Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7194894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7196589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7198004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7199466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7200986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7202405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7203821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7205224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7206711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7208207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7209631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7211052Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7212491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7213905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7215306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7216728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7218217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7219699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7221107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7222525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7223932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7225409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7226850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7228265Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%]
[251 further parametrized cases of TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads, timestamped 2024-08-07T18:08:32.7229676Z through 2024-08-07T18:08:32.7587244Z, every one SKIPPED in 0.0002-0.0003s with the same reason "(Does not support SDPA or pre-SM80 hardware)" at [ 40%] progress. The span covers batch_size_1 / seq_len_q_8 with seq_len_k_587 (tail of head_dim 256, then head_dim 32, 64, 72, 8, 96) and seq_len_k_64 (head_dim 128, 160, 16, 192, 203, then the head of 21), each head_dim crossed with is_causal {False, True} x dropout_p {0_0, 0_22, 0_48} x dtype {bfloat16, float16} x scale {scale0, scale_l1}.]
2024-08-07T18:08:32.7588658Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7590071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7591481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7592894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7594293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7596016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7597548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7598954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7600336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7601807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7603273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7604677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7606066Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7607478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7608877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7610256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7611670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7613077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7614549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7616026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7617441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7618888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7620390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7621838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7623261Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7624674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7626105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7627502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7628901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7630336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7631745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7633182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7634638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7636058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7637466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7638879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7640319Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7641789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7643194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7644610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7646009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7647429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7648837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7650232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7651682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7653137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7654560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7655949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7657361Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7658804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7660286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7661676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7663094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7664490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7665913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7667299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7668688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7670146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7671600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7673000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7674389Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7675805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7677247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7678697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7680094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7681519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7682927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7684337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7685731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7687134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7688563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7690007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7691477Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7692888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7694321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7696055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7697555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7698950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7700386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7701768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7703167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7704557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7705976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7707360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7708813Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7710305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7711707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7713113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7714545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7716014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7717414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7718860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7720262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7721685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7723089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7724501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7725894Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7727356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7728841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7730247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7731666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7733060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7734519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7735943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7737345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7738742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7740171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7741554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7742963Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7744363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7745809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7747260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7748646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7750064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7751464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7752907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7754342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7755756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7757160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7758613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7760011Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7761493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7762895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7764365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7765817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7767204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7768611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7769988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7771432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7772922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7774331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7775714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7777117Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7778506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7779925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7781300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7782698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7784210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7785664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7787108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7788510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7790049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7791516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7792927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7794346Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7796044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7797471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7798883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7800316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7801733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7803209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7804678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7806084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7807485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7808970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7810435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 40%] 2024-08-07T18:08:32.7811842Z 
2024-08-07T18:08:32.7813240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED (Does not support SDPA or pre-SM80 hardware) [ 40% - 41%]
2024-08-07T18:08:32.8172446Z [several hundred parametrized variants of this test skipped for the same reason, each in ~0.0002s; the sweep covers batch_size=1, seq_len_q=8, seq_len_k in {8, 64}, head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, and scale in {scale0, scale_l1}]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8173861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8175267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8176670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8178063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8179472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8180852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8182294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8183720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8185129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8186525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8187965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8189389Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8190777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8192210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8193589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8194981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8196732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8198169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8199553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8201028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8202531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8203957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8205368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8206840Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8208300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8209704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8211111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8212547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8213957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8215351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8216755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8218187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8219647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8221095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8222509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8223888Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8225295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8226730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8228154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8229556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8231015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8232497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8233920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8235371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8236812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8238317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8239789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8241243Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8242695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8244162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8245634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8247134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8248562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8250003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8251446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8252865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8254317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8255756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8257237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 
2024-08-07T18:08:32.8258714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8260166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8261614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8263060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8264523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8266022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8267464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8268912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8270342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8271808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8273278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8274703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%]
2024-08-07T18:08:32.8276203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8277686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8279146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8280576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8282052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8283516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8285011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8286427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8287874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8289308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8290768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8292220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%]
2024-08-07T18:08:32.8293657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8295465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8296997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8298434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8299863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8301314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8302822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8304321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8305743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8307198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8308637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8310079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%]
2024-08-07T18:08:32.8311511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8312976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8314453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8315927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8317376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8318835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8320334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8321809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8323238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8324661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8326113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8327527Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8328965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8330391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8331850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8333304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8334795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8336231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8337672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8339117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8340582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8342116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8343563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 
2024-08-07T18:08:32.8345015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8346443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8347901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8349348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8350791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8352282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8353778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8355214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8356651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8358115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8359588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8361044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%]
2024-08-07T18:08:32.8362489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8363931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8365356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8366813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8368234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8369676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8371146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8372730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8374155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8375599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8377084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8378574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%]
2024-08-07T18:08:32.8380016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8381446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8382917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8384361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8385804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8387240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8388689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8390158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8391643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8393070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8394521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8396293Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8397826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8399254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8400689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8402151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8403571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8405013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8406446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8407899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8409374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8410893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8412349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 
2024-08-07T18:08:32.8413800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8415274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8416767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8418236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8419717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8421140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8422612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8424039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8425471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8426911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8428370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8429871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%]
2024-08-07T18:08:32.8431307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8432766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8434230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8435720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8437148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8438582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8439995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8441445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8442910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8444337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8445784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8447269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%]
2024-08-07T18:08:32.8448783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8450214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8451665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8453174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8454684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8456108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8457562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8458987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8460439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8461856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8463313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8464750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8466224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8467713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8469139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8470591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8472098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8473582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8475005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8476455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8477889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8479329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8480749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8482216Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8483653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8485133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8486626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 41%] 2024-08-07T18:08:32.8488057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8489513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8490979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8492484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8493912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8495609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8497080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8498515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 
2024-08-07T18:08:32.8499935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8501390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8502830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8504333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8505830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8507280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8508687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8510163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8511698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8513158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8514598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8516015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 42%] 2024-08-07T18:08:32.8517459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8518935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8520385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8521803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8523319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8524804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8526245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8527664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8529148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8530620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8532030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8533485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8534938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8536379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8537790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8539235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8540664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8542152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8543635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8545070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8546498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8547999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8549462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8550927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does 
not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8552364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8553799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8555238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8556666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8558125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8559561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8561062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8562547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8563994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8565422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8566897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8568356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8569798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8571235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8572672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8574107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8575533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8576986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8578404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8579883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8581358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8582828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8584245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8585728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 
SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8587201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8588654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8590079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8591515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8592961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8594392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8596087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8597524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8599026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8600522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8601952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8603381Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8604879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8606367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8607788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8609199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8610634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8612058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8613506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8614917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8616345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8617833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8619343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8620785Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8622216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8623740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8625219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8626659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8628090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8629540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8630959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8632399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8633835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8635286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8636734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 
2024-08-07T18:08:32.8638203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8639647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8641078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8642566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8644029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8645469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8646901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8648330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8649743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8651190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8652638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8654081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 42%] 2024-08-07T18:08:32.8655540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8657079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8658537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8659959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8661454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8662954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8664409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8665830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8667271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8668690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8670133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8671547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8673006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8674506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8676012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8677432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8678853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8680342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8681824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8683278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8684702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8686151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8687583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8689019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.8690444Z-2024-08-07T18:08:32.9053291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_{8,16,21,32,64,72,96,160,192,203,256}_is_causal_{False,True}_dropout_p_{0_0,0_22,0_48}_{bfloat16,float16}_{scale0,scale_l1}_cuda_{bfloat16,float16} SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 42%]
42%] 2024-08-07T18:08:32.9054813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9056216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9057626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9059066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9060540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9062013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9063427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9064901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9066328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9067754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9069177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9070641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
42%] 2024-08-07T18:08:32.9072122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9073615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9075070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9076532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9077978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9079448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9080946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9082384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9083853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9085305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9086756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9088190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9089645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9091107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9092595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9094022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9095756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9097229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9098756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9100252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9101693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9103136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9104561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9106031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9107479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9108921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9110404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9111933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9113382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9114836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9116267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9117760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9119287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9120740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9122173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9123599Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9125075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9126495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9127929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9129415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9130922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9132341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9133784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9135239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9136734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9138202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9139643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 
2024-08-07T18:08:32.9141078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9142513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9143947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9145400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9146853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 42%] 2024-08-07T18:08:32.9148328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9149822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9151249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9152690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9154126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9155629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9157100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 43%] 2024-08-07T18:08:32.9158541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9159971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9161401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9162817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9164244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9165714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9167174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9168662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9170091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9171541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9172957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9174433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9175936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9177397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9178830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9180274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9181712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9183182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9184627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9186099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9187606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9189058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9190499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9191934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9193418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9194931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9196650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9198081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9199536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9200974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9202417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9203836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9205381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9206893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9208318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9209756Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9211196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9212705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9214189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9215669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9217112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9218614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9220038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9221493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9222930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9224429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9225929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 
2024-08-07T18:08:32.9227386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9228814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9230248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9231725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9233192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9234640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9236102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9237545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9238968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9240421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9241856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9243339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9244811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9246277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9247711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9249192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9250659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9252082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9253539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9254961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9256403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9257831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9259291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9260714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does 
not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9262194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9263663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9265121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9266530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9268060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9269544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9270975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9272408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9273819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9275284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9276719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9278154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 43%]
2024-08-07T18:08:32.9281074Z [log condensed] test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* -- every parameterization in this block (batch_size=8; seq_len_q=1024; seq_len_k in {256, 2048}; head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 256}; is_causal in {False, True}; dropout_p in {0.0, 0.22, 0.48}; dtype in {float16, bfloat16}; scale in {scale0, scale_l1}) was SKIPPED in 0.0002s-0.0003s each, all with the same reason: (Does not support SDPA or pre-SM80 hardware) [ 43%]
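Why these tests skip: the runner's GPU reports a CUDA compute capability below 8.0, and PyTorch's FlashAttention SDPA kernels require SM80 (Ampere) or newer, so the whole flash-attention-vs-math parameter grid is gated off. Below is a minimal sketch of such a capability gate, assuming a plain unittest-style guard; the helper name supports_flash_attention and the TestFlashSDPA class are illustrative assumptions rather than the exact guard used in test_transformers.py (PyTorch's suite uses its own platform-support flags), but torch.cuda.get_device_capability and scaled_dot_product_attention are real PyTorch APIs.

import unittest

import torch


def supports_flash_attention() -> bool:
    # FlashAttention needs an NVIDIA GPU with compute capability >= 8.0 (SM80).
    if not torch.cuda.is_available():
        return False
    major, _minor = torch.cuda.get_device_capability()
    return major >= 8


@unittest.skipIf(not supports_flash_attention(),
                 "Does not support SDPA or pre-SM80 hardware")
class TestFlashSDPA(unittest.TestCase):
    def test_flash_attention_runs(self):
        # Illustrative shape only: (batch, heads, seq_len, head_dim).
        q = torch.randn(8, 4, 1024, 64, device="cuda", dtype=torch.float16)
        out = torch.nn.functional.scaled_dot_product_attention(
            q, q, q, is_causal=True
        )
        self.assertEqual(out.shape, q.shape)


if __name__ == "__main__":
    unittest.main()

On a pre-SM80 runner (such as this job's GPU) the guard fires and pytest reports the test as SKIPPED with the message seen throughout this log, which is why every entry above carries an identical reason string.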
hardware) [ 43%] 2024-08-07T18:08:32.9610246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9611764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9613248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9614673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9616204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9617636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9619084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9620496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9621946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9623374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9624846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9626321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9627761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9629212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9630691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9632156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9633598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9635030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9636452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9637890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9639346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9640804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9642225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9643705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9645194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9646654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9648071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9649580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9651048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9652495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9653905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9655321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9656767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9658189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9659649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9661068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9662568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9664054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9665486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9666904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9668390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9669932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9671354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9672769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9674215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9675646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9677062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9678504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9679947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9681442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9682919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9684358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9685774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9687215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9688671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9690152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9691572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9693015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9694426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9696184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9697629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9699060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9700570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9702067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9703523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9704962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9706403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9707881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9709411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9710852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9712305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9713763Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9715209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9716693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9718138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9719628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9721109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9722554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9723962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9725396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9726864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9728379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9729822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 
2024-08-07T18:08:32.9731267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9732688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9734136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9735552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9736983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9738447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9739946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9741377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9742787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9744233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9745701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9747178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 
43%] 2024-08-07T18:08:32.9748594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9750067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9751495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9752925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9760519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9762109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9763654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9765157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9766596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9768040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9769469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9770940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
43%] 2024-08-07T18:08:32.9772429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9773869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9775306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9776740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9778173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9779604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9781056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9782513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9783996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9785430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9786888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9788307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
43%] 2024-08-07T18:08:32.9789788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9791263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9792712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9794133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9795871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9797315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9798744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9800172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9801684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9803205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9804634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9806076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
43%] 2024-08-07T18:08:32.9807499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9809009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9810499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9811925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9813343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9814788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9816281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9817708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 43%] 2024-08-07T18:08:32.9819145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9820612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9822113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9823533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
44%] 2024-08-07T18:08:32.9824980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9826431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9827928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9829390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9830826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9832248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9833693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9835096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9836561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9837987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9839450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9840936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
44%] 2024-08-07T18:08:32.9842353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9843782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9845206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9846709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9848234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9849671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9851098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9852527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9853938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9855388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9856844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9858318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:32.9859778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9861200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9862648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9864060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9866178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9867713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9869193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9870600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9872027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9873445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9874885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9876293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:32.9877786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9879277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9880701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9882128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9883542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9885037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9886516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9887962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9889389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9890839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9892272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9893712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:32.9895442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9897052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9898572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9900006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9901434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9902854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9904377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9905853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9907298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9908731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9910177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9911583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:32.9913021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9914437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9915965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9917437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9918872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9920299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9921744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9923202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9924666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9926109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9927566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9929000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:32.9930419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9931868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9933292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9934729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9936186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9937695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9939120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9940540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9941996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9943461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9944903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9946317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:32.9947768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9949187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9950632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9952038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9953471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9954948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9956451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9957884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9959316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9960787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9962286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9963694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:32.9965117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9966564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9968022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9969452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9970870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9972310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9973770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9975244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9976654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9978115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9979636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9981111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:32.9982521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9983944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9985387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9986794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9988248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9989668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9991109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9992562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9994050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9995728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9997189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:32.9998691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0000186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0001606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0003051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0004456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0005870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0007296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0008736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0010153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0011612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0013116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0014541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0015997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0017453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0018965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0020382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0021806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0023207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0024637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0026083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0027499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0028958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0030431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0031932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0033352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0034790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0036220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0037707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0039171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0040606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0042028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0043471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0044876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0046290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0047757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0049227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0050704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0052119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0053557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0054980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0056452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0057919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0059362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0060795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0062228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0063641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0065089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0066523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0067990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0069489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0070912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0072362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0073774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0075253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0076776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0078239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0079647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0081072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0082490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0083939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0085350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0086801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0088316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0089748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0091169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0092579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0094070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0095797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0097239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0098674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0100115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0101535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0102968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0104379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0105889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0107378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0108811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0110246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0111659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0113147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0114608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0116071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0117496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0118960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0120360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0121784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0123198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0124674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0126118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0127526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0128997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0130428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0131901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0133361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0134803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0136241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0137677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0139121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0140569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0141998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0143436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0144904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0146395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0147822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0149246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0150722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0152192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0153638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0155050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0156485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0157902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0159374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0160782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0162210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0163688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0165199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0166602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0168011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0169520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0170993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0172412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0173830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0175273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0176691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0178143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0179564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0180992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0182447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0183916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0185318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0186750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0188215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0189674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0191092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0192509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0193944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0195633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0197077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0198496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0199960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0201430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0202933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0204356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0205797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 2024-08-07T18:08:33.0207265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%] 
2024-08-07T18:08:33.0208753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0210188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0211614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0213036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0214452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0215926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0217351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0218792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0220240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0221726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0223144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0224565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0225975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0227459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0228940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0230345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0231770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0233184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0234619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0236026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0237454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0241680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0245064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0246488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0247904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0249368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0250838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0252239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0253688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0255104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0256527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0257922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0259344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0260750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0262252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0263876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0265344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0266790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0268207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0269634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0271034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0272467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0273909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0275326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0276731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0278175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0279585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0281044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0282611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0284138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0285551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0286953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0288386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0289783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0291211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0292607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0294042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0295770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0297233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0298628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0300043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0301582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0303147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0304560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0305965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0307396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0308809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0310224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0311632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0313075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0314521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0315995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0317409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0318844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0320339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0321847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0323248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0324702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0326118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0327505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0328921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0330332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0331763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0333144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0334589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0336007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0337437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0338908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0340408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0341832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0343286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0344725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0346143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0347586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0349024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0350462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0351884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0353334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0354796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0356226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0357727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0359254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0360675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0362100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0363521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0364987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0366419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0367842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0369280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0370702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0372149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0373568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0375013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0376519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0378068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0379476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0380911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0382349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0383808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0385231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0386674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0388109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0389540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0390979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0392403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0393861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0395653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0397216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0398623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0400075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0401511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0402939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0404380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0405828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0407262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0408691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0410110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0411538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0412971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0414484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0416047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0417477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0418933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0420352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0421782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0423203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0424681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0426091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0427526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0428943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0430370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0431788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0433270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0434821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0436240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0437666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0439088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0440519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0441937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0443365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0444796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0446239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0447670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0449110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0450528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0452050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0453589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0455020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0456467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0457897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0459350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0460764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0462213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0463633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0465074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0466483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0467919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0469335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0470857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0472360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0473781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0475232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0476666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0478093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0479506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 44%]
2024-08-07T18:08:33.0480955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0482393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0483836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0485254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0486712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0488143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0489655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0491163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0492610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0494071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0495823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0497283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0498712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0500161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0501570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0502998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0504450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0505907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0507313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0508865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0510404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0511845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0513255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0514702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0516175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0517605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0519037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0520453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0521896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0523326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%]
2024-08-07T18:08:33.0524786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0526197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0527738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0529258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0530689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0532106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0533546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0534992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0536397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0537832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0539254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0540882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.0542297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0543740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0545169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0546693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0548189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0549614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0551042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0552501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0553917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0555369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0556801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0558239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
45%] 2024-08-07T18:08:33.0559673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0561098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0562555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0563994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0565539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0567059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0568495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0569920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0571352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0572763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0574218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0575659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 45%] 2024-08-07T18:08:33.0577074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0578503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0579928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0581381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0582791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0584363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0585890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0587334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0588740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0590172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0591596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0593036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0594474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0596188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0597628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0599058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0600549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0601963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0603510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0605061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0606478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0607883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0609318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0610742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0612161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0613570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0615038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0616497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0617928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0619345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0620763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0622286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0623780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0625288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0626711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0628160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0629574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0630999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0632422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0633871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0635297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0636730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0638151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0639585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0641060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0642548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0643982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0645423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0646853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0648267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0649704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0651131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0652565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0653972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0655418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0656849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0658272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0659755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0661279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0662724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0664139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0665580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0667009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0668456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0669868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0671307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0672715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0674166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0675574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0676998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0678489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0680016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0681415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0682822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0684289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0685715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0687136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0688548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0689998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0691415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0692840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0694283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0695981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0697490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0699033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0700529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0701970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0703399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0704831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0706260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0707667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0709103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0710494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0711919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0713334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0714797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 45%] 2024-08-07T18:08:33.0716277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0717791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0719247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0720682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0722084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0723499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0724957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0726386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0727819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0729227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0730670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0732107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 45%] 2024-08-07T18:08:33.0733533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0735013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0736564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0738040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0739466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0740886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0742329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0743746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0745170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0746606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0748021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0749462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 45%] 2024-08-07T18:08:33.0750874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0752294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0753747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0755292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0756746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0758171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0759595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0761045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0762451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0763885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0765321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0766752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 
hardware) [ 45%] 2024-08-07T18:08:33.0768188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0769618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0771080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0772559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0774089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0775565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0776994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0778417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0779846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0781251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0782693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0784139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 45%] 2024-08-07T18:08:33.0785546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0786979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0788407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0789857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0791308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0792824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0794313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0796047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0797478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0798931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0800352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0801801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
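Every skip above carries the same reason: the runner's GPU predates SM80. This job runs on a g3.4xlarge instance (per the job setup earlier in the log), whose NVIDIA Tesla M60 reports CUDA compute capability 5.2, below the SM80 (Ampere) floor required by the flash-attention SDPA kernels, so each parameterization of test_flash_attention_vs_math_ref_grads is skipped before its body runs. Below is a minimal, illustrative sketch of how such a guard can be written with pytest and public PyTorch APIs; it assumes torch.cuda.get_device_capability as the capability check and is not the helper the PyTorch test suite actually uses.

    import pytest
    import torch

    def is_sm80_or_newer() -> bool:
        # get_device_capability() returns (major, minor); SM80 == (8, 0),
        # the Ampere floor the flash-attention SDPA kernels require.
        if not torch.cuda.is_available():
            return False
        major, _minor = torch.cuda.get_device_capability()
        return major >= 8

    @pytest.mark.skipif(
        not is_sm80_or_newer(),
        reason="Does not support SDPA or pre-SM80 hardware",
    )
    def test_flash_attention_smoke():
        # Tiny stand-in for the grid above: batch 8, 4 heads,
        # seq_len 1024, head_dim 64, float16 on CUDA.
        q = torch.randn(8, 4, 1024, 64, device="cuda", dtype=torch.float16)
        out = torch.nn.functional.scaled_dot_product_attention(q, q, q)
        assert out.shape == q.shape

On pre-SM80 hardware the skipif condition fires and pytest emits SKIPPED records like those in this run; on SM80 or newer the test body executes instead.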
hardware) [ 45%] 2024-08-07T18:08:33.0803219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0804674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0806099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0807541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0810266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0813051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0815889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0818647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0821329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0824019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0826677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0829612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 45%] 2024-08-07T18:08:33.0833036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0835708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0838371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0841071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0843725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0846403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0849062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0853648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0856478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0859139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0861811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0864498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 
hardware) [ 45%] 2024-08-07T18:08:33.0867173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0870457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0874000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0876756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0879430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0882106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0884770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0887433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0890797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0894312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0897320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0899986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
45%] 2024-08-07T18:08:33.0902675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0905335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0908235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0911147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0914199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0916881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0919555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0922260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0924939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0927768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0930644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0933327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.0936019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0938688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0941371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0944060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0946740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0949402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0952129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0954806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0957483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0960134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0962881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0965651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.0968288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0970939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0973617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0976353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0979043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0981707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0984355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0987036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0989713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0992374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0995371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.0998234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1001017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1003683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1006385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1009088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1011762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1014456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1017175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1019830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1022483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1025155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1027827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1030503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1033288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1036016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1038675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1041352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1044013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1046663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1049328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1052036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1054692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1057338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1060021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1062705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1065357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1068071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1070866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1073539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1076209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1078892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1081548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1084212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1086845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1089510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1092158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1094829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1098439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1101106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1103955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1106774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1109447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1112091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1114767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1117507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1120163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1122848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1125538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1128250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1130936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1133639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1136296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1139075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1141849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1144515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1147206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1149892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1152572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1155208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1157898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1160599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1163273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1165933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1168615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1171293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1174040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1176790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1179475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1182167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1184833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1187504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1190144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1192825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1195953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1198649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1201343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1204032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1206707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1209487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1212259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1214915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1217634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1220298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1222948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1225595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1228246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1230892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1233564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1236279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1238957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1241577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1244279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1247039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1249739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1252405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1255078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1257764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1260449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1263104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1265778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1268453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1271116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1273760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1276433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1279142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1281887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1284596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1287270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1289947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1292585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1295682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1298463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1301138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1303786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1306428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1309076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1311749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1314544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1317376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1320112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1322790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1325438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1328096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1330759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1333464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1336144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1338792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1341453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1344121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1346762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1349443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1352211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1354925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1357555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1360202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1362873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1365536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1368213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1370873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1373522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1376175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1378832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1381473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1384177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1386950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1389668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1392288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1394981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1398111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1400769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1403434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1406097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1408733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1411372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1414011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1416700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1419360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 
2024-08-07T18:08:33.1422160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1424910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1427617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1430292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 45%] 2024-08-07T18:08:33.1432946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1435573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1438223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1440884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1443568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1446231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1448917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1451596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1454264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1457007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1459746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1462432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1465100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1467757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1470407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1473060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1475722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1478350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1481017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1483687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1486343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1488981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1491706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1494468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1497577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1500262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1502969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1505640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1508278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1510948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1513623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1516342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1519087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1521777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1524452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1527301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1530131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1532793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1535470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1538174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1540832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1543458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1546127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1548820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1551468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1554121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1556839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1559487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1562241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1564988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1567651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1570334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1573007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1575679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1578349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1581037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1583729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1586382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1589131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1591805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1594490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1597722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1600521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1603194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1605849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1608548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1611220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1613913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1616625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1619298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1621960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1624642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1627287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1629954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1632726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1635495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1638157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1640817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1643470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1646145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1648794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1651463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1654134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1656838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1659479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1662142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1664808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1667580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1670333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1673010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1675660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1678341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1681015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1683680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1686385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1689068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1691725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1694351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1697647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1700334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1703148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1705944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1708681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1711376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1714043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1716758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1719444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1722140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1724827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1727469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1730130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1732806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1735454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1738163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1740909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1743626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1746297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1748953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1751625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1754292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1756960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1759611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1762277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1764995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1767671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1770327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1773018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1775777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1778531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1781214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1783962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1786655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1789322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1792006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1794685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1797828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1800521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1803176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1805844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1808512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1811282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1814089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1816793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1819526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1822202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1824857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1827503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1830171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1832820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1835462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1838158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1840834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1843484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1846222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1848977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1851644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1854305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1856987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1859634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1862275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1864923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1867564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1870209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1872897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1875575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1878190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1880941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1883701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1886348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1889006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1891668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1894346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1897459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1900157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1902836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1905525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1908203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1910854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1913574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1916444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1919240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1921895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1924570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1927263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1929890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1932548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1935216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1937893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1940550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1943202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1945852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1948517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1951270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1954044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1956696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1959363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1962003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1964642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1967303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1969991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.1972652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1975296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1977938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1980642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1983294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1986024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1988783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1991435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1994076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1997176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.1999835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2002493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.2005159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2007832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2010476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2013119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2015817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2018458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2021200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2024011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2026737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2029355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2032022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2034716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.2037368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2040020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2042721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2045373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2048029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2050692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2053335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2056036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2058812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2061565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2064211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2066877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.2069551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2072184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2074842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2077490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2080151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2082797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2085454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2088138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2090841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2093600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2096692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2099394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.2102058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2104717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2107373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2110046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2112692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2115357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2118082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2120746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2123408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2126061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2128855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2131645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.2134286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2136954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2139619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2142300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2144933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2147586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2150256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2152919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2155597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2158240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2160865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2163612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2166365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2167778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2169193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2170663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2172070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2173474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2174905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2176320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2177726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2179125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2180588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2182000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2183497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2184984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2186411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2187827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2189243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2190666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2192100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2193523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2194922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2196747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2198178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2199622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2201042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2202608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2204137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2205576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2206979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2208411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2209828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2211276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2212672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2214084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2215501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2216966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2218386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2219788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2221318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2222825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2224231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2225634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_1024_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2227093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2228530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2229963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2231408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2232868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2234300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2235723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2237172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2238603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2240136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2241653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2243093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2244513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2245961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2247371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2248798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2250224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2251679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2253092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2254530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2255958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2257393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2258870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2260385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2261896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2263333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2264774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2266196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2267641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2269079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2270544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2271967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2273416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2274860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2276274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2277799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2279308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2280772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2282184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2283625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2285074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2286521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2287935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2289378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2290827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2292286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2293698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2295486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2297069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2298623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2300114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2301559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2303011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2304446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2305878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2307298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2308740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2310165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2311620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2313043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2314487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2315974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2317515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2318955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2320371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2321841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2323251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2324672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2326089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2327537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2328938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2330368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2331818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2333265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2334724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2336263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2337740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2339193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%]
2024-08-07T18:08:33.2340618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2342050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2343494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2344939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2346382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2347799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2349248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2350693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2352122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2353577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2355111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2356589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2358024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2359448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2360898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2362345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2363759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2365256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2366685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2368141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2369564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2371021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2372498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2374039Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2375513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2376958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2378391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2379851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2381290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2382718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2384166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2385589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2387023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2388450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2389892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 
2024-08-07T18:08:33.2391390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2392912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2394377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2396224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2397690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2399127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2400538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2402002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2403430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2404833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2406261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2407692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
46%] 2024-08-07T18:08:33.2409138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2410623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2412270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2413771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2415223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2416686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2418135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2419543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2420983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2422406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2423827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2425247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
46%] 2024-08-07T18:08:33.2426679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2428100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 46%] 2024-08-07T18:08:33.2429556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2431078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2432554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2433975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2435383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2436838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2438267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2447938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2449569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2451048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.2452502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2453960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2455386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2456950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2459235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2460757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2462177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2463608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2465070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2466482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2467917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2469356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2470831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2472250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2473695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2475115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2476614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2478126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2479622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2481065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2482497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2483935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2485352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2486801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2488247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2489681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2491131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2492580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2494002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2496029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2497665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2499169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2500583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2502026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2503439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2504852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support 
SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2506292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2507713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2509167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2510622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2512127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2513858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2515368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2516925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2518423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2519824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2521249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2522701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2524148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2525560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2526984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2528424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2529855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2531290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2532717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2534187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2535694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2537161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2538560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2539995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2541440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2542863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2544275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2545720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2547136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2548535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2549967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2551400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2552884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2554379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2555877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2557294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2558741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2560159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2561609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2563035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2564488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2565899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2567316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2568759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2570185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2571678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2573179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2574667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2576087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2577514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2578928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2580364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2581811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2583242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2584646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2586080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2587530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2588934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2590362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2591890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2593420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2594829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2596859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2598336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2599796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2601209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2602671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2604121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2605557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2606959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2608439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2610072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.2612005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2613558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2614984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2616451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2617903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2619313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2620876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2622845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2624310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2625736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2627155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2628601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
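Note: every skip above comes from the same gate. PyTorch only dispatches the flash-attention SDPA kernels on SM80 (compute capability 8.0, Ampere) or newer GPUs, and this job's GPU is pre-SM80, so the entire test_flash_attention_vs_math_ref_grads grid is skipped before running. A minimal sketch of that kind of capability check, assuming a CUDA build of PyTorch; the helper name below is illustrative, not the one test_transformers.py actually uses:

    import torch

    # Illustrative capability gate (hypothetical helper; the real check lives
    # in the PyTorch test suite): flash-attention SDPA needs an SM80+ GPU.
    def supports_flash_sdpa() -> bool:
        if not torch.cuda.is_available():
            return False
        major, minor = torch.cuda.get_device_capability()
        return (major, minor) >= (8, 0)  # SM80 == compute capability 8.0

    if __name__ == "__main__":
        # On a pre-SM80 runner this prints False, and pytest marks the
        # corresponding tests SKIPPED rather than running them.
        print("flash SDPA supported:", supports_flash_sdpa())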
hardware) [ 47%] 2024-08-07T18:08:33.2630025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2631807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2633723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2635895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2637361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2638806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2640213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2641644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2643068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2644868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2646859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2648960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.2650765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2652180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2653701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2655661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2657667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2659644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2661542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2663225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2664926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2666854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2668731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2670507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.2671960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2673404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2675149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2677147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2678690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2680111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2682117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2683580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2684991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2686410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2687818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2689257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.2690680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2692112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2693541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2694954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2696957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2698499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2699922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2701351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2702844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2704237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2705667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2707097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.2708546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2709955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2711393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2712848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2714279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2715853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2717364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2718791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2720209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2721642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2723076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2724506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.2725936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2727370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2728767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2730241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2731666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2733103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2734653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2736165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2737595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2739002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2740435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2741849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.2743308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2744715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2746151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2747560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2749000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2750414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2751841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2753340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2754832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2756241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2757638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2759073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0004s] (Does not support SDPA or pre-SM80 hardware) [ 
47%] 2024-08-07T18:08:33.2760491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2761907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2763332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2764760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2766169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2767590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2769003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2770436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2771939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2773454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2774858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2776285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2777736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2779143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2780578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2782012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2783467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2784877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2786314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2787735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2789171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2790664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2792189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2793607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2795394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2796870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2798287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2799718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2801139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2802588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2804005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2805441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2806868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2808291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2809832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2811387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2812838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2814272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2815692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2817172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2818618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2820040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2821495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2822935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2824374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2825783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2827208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2828706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2830231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2831632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2833101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2834517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2835964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2837364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2838813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2840255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2841671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2843116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2844542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2845975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2847477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2848999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2850403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2851840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2853288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2854715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2856115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2857545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2858960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2860356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2861770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2863210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2864642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2866084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2867595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2869052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2870487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2871885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2873322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2874743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2876184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2877594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2879006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2880451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2881889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2883320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2884777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2886306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2887782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2889208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2890629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2892083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2893513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2895444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2896943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2898377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2899828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2901284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2902754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2904498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2906080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2907557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2908983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2910400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2911842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2913252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2914675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2916142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2917585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2918987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2920425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2921835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2923320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2924849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2926305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2927725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2929137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2930555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2931947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2933420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2934840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2936252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2937653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2939085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2940499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2941943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2943472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2944939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2946369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2947775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2949209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2950621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2952063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2953495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2954919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2956332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2957775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2959177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2960596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2962072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2963593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2965002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2966403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2967839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2969245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2970660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2972069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2973514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2974925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2976339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2977736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2979160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2980653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2982159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2983628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2985089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.2986577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2987979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2989412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2990835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2992780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2994818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2997699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.2999617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3001869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3004393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3006609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3008669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.3010906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3012719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3014470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3016570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3018767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3020787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3022721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3024768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3026678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3028575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3030883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3032449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.3033862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3035302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3036714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3038125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3039570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3040969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3042406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3043789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3045220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3046619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3048019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3049552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.3051082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3052469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3053882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3055302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3056727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3058119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3059518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3060950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3062367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3063786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3065208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3066643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.3068145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3069650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3071058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3072489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3073912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3075358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3076766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3078173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3079608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3080993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3082414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3083821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.3085275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3086755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3088267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3089672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3091100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3092498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3093920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3095812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3097307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3098741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3100156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3101607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 
2024-08-07T18:08:33.3103051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3104488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3106074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3107639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3109076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3110517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3111944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3113383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3114800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3116298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3117722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 47%] 2024-08-07T18:08:33.3119167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 47%] 2024-08-07T18:08:33.3120601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3122026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3123458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3124964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3126514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3127928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3129365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3130794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3132250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3133661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3135100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3136549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3138002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3139429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3140853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3142295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3143803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3145367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3146788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3148229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3149662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3151092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3152502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3153949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3155400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3156831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3158250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3159701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3161129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3162615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3164136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3165574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3167017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3168431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3169868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3171289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3172746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3174155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3175610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3177040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3178494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3179905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3181416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3182916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3184330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3185780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3187200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3188687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3190107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3191537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3192957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3194391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3196285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3197750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3199157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3200689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3202250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3203760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3205176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3206640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED 
2024-08-07T18:08:33.3208100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_* SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%]
2024-08-07T18:08:33.3571056Z [condensed: several hundred consecutive SKIPPED entries for parameterized variants of test_flash_attention_vs_math_ref_grads with batch_size 8 and seq_len_q 143, sweeping seq_len_k (2048, then 256), head_dim (8 through 256), is_causal (False/True), dropout_p (0.0/0.22/0.48), dtype (float16/bfloat16), and scale (scale0/scale_l1); each variant is skipped in 0.0002-0.0003s with the identical reason "Does not support SDPA or pre-SM80 hardware" while progress holds at 48%]
[ 48%] 2024-08-07T18:08:33.3572461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3573901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3575316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3576739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3578173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3579603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3581111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3582603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3584025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3585447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3586887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3588303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 
2024-08-07T18:08:33.3589731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3591149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3592588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3593986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3595794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3597248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3598711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3600240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3601776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3603178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3604595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3606017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 
2024-08-07T18:08:33.3607414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3608865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3610283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3611701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3613098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3614539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3615949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3617404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3618916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3620453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3621871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3623282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 
2024-08-07T18:08:33.3624719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3626141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3627589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3629023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3630461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3631876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3633322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3634741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3636170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3637657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3639243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3640639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
48%] 2024-08-07T18:08:33.3642065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3643486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3644909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3646327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3647734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3649174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3650592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3652021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3653437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3654874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3656411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3657931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
48%] 2024-08-07T18:08:33.3659336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3660776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3662219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3663644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3665051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3666475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3667947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3669355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3670787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3672206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3673638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3675112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) 
[ 48%] 2024-08-07T18:08:33.3676625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3678057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3679501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3680951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3682385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3683796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3685220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3686657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3688090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3689530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3690955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 2024-08-07T18:08:33.3692368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 48%] 
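The IDs in these records are machine-generated from a parameter sweep. Below is a minimal sketch of how such a grid expands into the IDs logged above, in plain Python; PyTorch's suite uses its own parametrization helpers, so the function name and constants here are illustrative assumptions, not the actual test code:

# sketch_param_grid.py -- hypothetical illustration, not PyTorch test code.
import itertools

HEAD_DIMS = [8, 16, 21, 32, 64, 72, 128, 160, 192, 203, 256]  # values seen in this span
IS_CAUSAL = [False, True]
DROPOUT_P = [0.0, 0.22, 0.48]
DTYPES = ["bfloat16", "float16"]
SCALES = ["scale0", "scale_l1"]  # as rendered in the test IDs

def test_ids(batch_size=8, seq_len_q=143, seq_len_k=256):
    """Yield IDs matching the naming pattern seen in the log."""
    for hd, causal, p, dt, sc in itertools.product(
        HEAD_DIMS, IS_CAUSAL, DROPOUT_P, DTYPES, SCALES
    ):
        p_tag = str(p).replace(".", "_")  # 0.22 -> "0_22"
        yield (
            f"test_flash_attention_vs_math_ref_grads_batch_size_{batch_size}"
            f"_seq_len_q_{seq_len_q}_seq_len_k_{seq_len_k}_head_dim_{hd}"
            f"_is_causal_{causal}_dropout_p_{p_tag}_{dt}_{sc}_cuda_{dt}"
        )

if __name__ == "__main__":
    ids = list(test_ids())
    print(len(ids), "parametrizations for this (batch_size, seq_len_q, seq_len_k) alone")
    print(ids[0])

Even for a single (batch_size, seq_len_q, seq_len_k) triple this grid yields 264 test IDs, which is why one unsupported GPU produces pages of identical SKIPPED lines.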
2024-08-07T18:08:33.3693852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED (Does not support SDPA or pre-SM80 hardware) [ 48-49%]
2024-08-07T18:08:33.3693852Z   (condensed: same reason and 0.0002s-0.0003s per-test timing as above; this span finishes the head_dim=21 grid, covers the full grids for head_dim in {256, 32, 64, 72}, and begins head_dim=8, over the same is_causal x dropout_p x dtype x scale sweep; the progress counter ticks from 48% to 49% partway through; the uniform skip reason points at a hardware capability gate, sketched after this block)
2024-08-07T18:08:33.3867291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3868742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3870165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3871558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3872983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3874402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3875843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3877236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3878684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3880093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3881514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3882979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 
2024-08-07T18:08:33.3884475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3885878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3887307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3888723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3890113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3891533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3892949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3894366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3896149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3897614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3899058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3900472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 
2024-08-07T18:08:33.3901994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3903544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3904959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3906381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3907801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3909241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3910656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3912059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3913489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3914892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3916368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3917783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 
2024-08-07T18:08:33.3919203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3920689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3922214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3923609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3925038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3926458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3927903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3929299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3930704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3932142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3933557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3934977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 
2024-08-07T18:08:33.3936384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3937828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3939305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3940877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3942337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3943769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3945191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3946615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3948042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3949468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3950886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3952277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 
2024-08-07T18:08:33.3953696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3955115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3956543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3958006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3959513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3960968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3962394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3963790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3965213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3966622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3968081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3969489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 
2024-08-07T18:08:33.3970887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3972330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3973751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3975171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3976617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3978161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3979624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3981041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3982448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3983876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3985282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3986698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 
2024-08-07T18:08:33.3988127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3989551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3990966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3992359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3993777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3995533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3997114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.3998663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4000085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4001493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4002931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4004317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4005732Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4007146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4008597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4009989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4011447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4012868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4014275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4015761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4017290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4018734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4020137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4021545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4022925Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4024345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4025755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4027166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4028582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4030005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4031415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4032802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4034292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4035785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4037211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4038638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4040074Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4041488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4042925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4044333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4045758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4047169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4048627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4050040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4051442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4052937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4054434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4055841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4057235Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4058688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4060102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4061521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4062910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4064344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4065752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4067164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4068587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4070015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4071508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4073003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4074421Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4075838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4077280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4078703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4080128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4081547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4083033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4084433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4085862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4087276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4088720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4090158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%] 2024-08-07T18:08:33.4091647Z 
2024-08-07T18:08:33.4093123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 49%]
2024-08-07T18:08:33.4453530Z [~250 further parametrized variants of test_flash_attention_vs_math_ref_grads condensed: batch_size=8, seq_len_q=143, seq_len_k in {4, 587}, head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, scale in {scale0, scale_l1} — every variant SKIPPED in ~0.0002s with the same reason "(Does not support SDPA or pre-SM80 hardware)", progress advancing from 49% to 50%]
2024-08-07T18:08:33.4454947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4456398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4457824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4459279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4460702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4462149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4463581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4465017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4466435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4467862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4469399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4470909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
50%] 2024-08-07T18:08:33.4472308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4473747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4475176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4476581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4478008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4479454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4480896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4482299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4483732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4485152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4486601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4488082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
50%] 2024-08-07T18:08:33.4489617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4491037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4492487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4493907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4495710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4497181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4498614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4500049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4501459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4502896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4504310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4505725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 50%] 2024-08-07T18:08:33.4507269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4508847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4510269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4511691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4513105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4514533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4515955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4517413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4518862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4520274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4521708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4523114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4524535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4526029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4527601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4529021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4530449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4531872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4533308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4534713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4536148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4537556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4538992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4540417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4541821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4543245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4544739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4546243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4547675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4549126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4550549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4551965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4553369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4554813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4556243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4557649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4559112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4560539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4561978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4563478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4565022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4566442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4567895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4569339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4570779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4572192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4573632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4575034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
50%] 2024-08-07T18:08:33.4576460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4577884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4579346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4580767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4582248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4583771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4585192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4586614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4588030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4589487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4590902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4592332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4593750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4595512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4596962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4598380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4599821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4601359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4602909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4604310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4605741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4607154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4608588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4609993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4611415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4612832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4614258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4615662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4617133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4618549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4620050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4621550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4622949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4624383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4625809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4627236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4628650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4630092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4631522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4632941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4634350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4635788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4637202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4638691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4640190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4641649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4643079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4644478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4645895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4647307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4648758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4650166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4651586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4652997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4654433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4655832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4657292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4658815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4660287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4661704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4663111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4664549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4665968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4667393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4668831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4670261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4671680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4673111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4674513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4675980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4677485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4678977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4680375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4681790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4683232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4684630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4686052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4687471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4688931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4690329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4691755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4693172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4694647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4696563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4698091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4699525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4700944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4702373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4703770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4705205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4706632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4708053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4709483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4710916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4712327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4713735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4715230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4716800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4718215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4719656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4721060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4722468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4723904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4725310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4726725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4728135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4729605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4731004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4732446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4733918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4735442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4736843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4738268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4739711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4741127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4742549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4743967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4745389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4746800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4748228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4749637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4751063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4752553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4754057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4755455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4756885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4758306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4759738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4761135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4762557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4763994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4765396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4766821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4768251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4769693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4771175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4772694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4774106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4775544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4776964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4778398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4779808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4781243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4782641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4784038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4785473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4786894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4788317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4789809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4791330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4792747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4794166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4795942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4797403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4798841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4800267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4801679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4803093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4804530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4805933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4807354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4808918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4810470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4811875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4813302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4814721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4816154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4817590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4819047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4820468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4821901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4823305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4824723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4826151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4827643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4829226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4830631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4832065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4833485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4834904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4836299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4837733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4839183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4840602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4842029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4843426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4844855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4846294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4847805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4849294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4850721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4852119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4853535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4854933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4856360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4857758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4859197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4860605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4862043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4863432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4864868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4866395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4867870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4869307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4870716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4872159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4873587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4875008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4876421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4877852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4879300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4880726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4882129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4883584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4885103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4886555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4887969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4889406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4890849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4892239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4893665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4895466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4896937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4898340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4899787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4901201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4902726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4904253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4905733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4907169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4908598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4910046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4911455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4912897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4914333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4915759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4917214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4918670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4920091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4921504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4922978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4924490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4925918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4927323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4928811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4930225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4931659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4933071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4934488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4935903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4937329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4938737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4940155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4941646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4943173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4944571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4946058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4947499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4948924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4950345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4951758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4953188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4954599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4956013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4957409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4958847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4960333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4961835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4963226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4964655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4966075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4967465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4968901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4970329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4971772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 2024-08-07T18:08:33.4973171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%] 
2024-08-07T18:08:33.4974600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 50%]
[~250 consecutive log entries elided; the originals were fused and split across extraction lines. They are the remaining parametrizations of test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads with batch_size_8 and seq_len_q_143: first seq_len_k_64 with head_dim ∈ {256, 32, 64, 72, 8, 96}, then seq_len_k_8 with head_dim ∈ {128, 160, 16, 192, 203}, each crossed with is_causal ∈ {False, True} × dropout_p ∈ {0_0, 0_22, 0_48} × dtype ∈ {bfloat16, float16} × scale ∈ {scale0, scale_l1}. Every entry is SKIPPED in 0.0002s-0.0003s with the identical reason "(Does not support SDPA or pre-SM80 hardware)", while the progress counter advances from [ 50%] to [ 51%] over timestamps 2024-08-07T18:08:33.497Z-2024-08-07T18:08:33.534Z.]
2024-08-07T18:08:33.5343917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%]
2024-08-07T18:08:33.5345376Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5346897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5349114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5350509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5351933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5353350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5354779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5356175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5357612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5359022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5360455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5361858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5363277Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5364734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5366254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5367727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5369131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5370566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5371997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5373418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5374826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5376250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5377664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5379079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5380481Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5381907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5383309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5384783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5386299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5387709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5389146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5390549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5391960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5393377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5394819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5396569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5398021Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5399449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5400891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5402289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5403824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5405383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5406804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5408255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5409679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5411104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5412518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5413936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5415336Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5416871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5418373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5419798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5421193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5422703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5424206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5425596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5427017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5428462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5429890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5431280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5432706Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5434111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5435538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5436933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5438379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5439786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5441329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5442823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5444221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5445635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5447041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5448472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5449858Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5451281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5452691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5454096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5455504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5456923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5458367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5459798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5461295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5462771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5464184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5465581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5466995Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5468431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5469874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5471270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5472693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5474098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5475539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5476931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5478415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5479907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5481381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5482762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5484155Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5485579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5486982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5488416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5489814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5491229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5492636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5494042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5495696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5497222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5498781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5500260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5501646Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5503057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5504497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5505886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5507308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5508747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5510177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5511570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5512989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5514385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5515799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5517299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5518821Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5520229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5521651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5523048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5524442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5525872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5527276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5528709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5530104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5531525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5532944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5534352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5535829Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5537347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5538775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5540191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5541596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5543000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5544427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5545818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5547242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5548656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5550077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5551468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5552867Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5554333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5555846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5557223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5558651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5560050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5561455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5562858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5564248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5565675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5567085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5568524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%] 2024-08-07T18:08:33.5569922Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_143_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 51%]
[2024-08-07T18:08:33Z: ~250 further test_flash_attention_vs_math_ref_grads parametrizations SKIPPED, 0.0002-0.0003s each, all with the same reason "Does not support SDPA or pre-SM80 hardware": batch_size=8; shapes (seq_len_q=143, seq_len_k=8, head_dim=96) and (seq_len_q=2048, seq_len_k=1024) with head_dim in {16, 21, 32, 64, 72, 128, 160, 192, 203, 256}; is_causal in {False, True}; dropout_p in {0.0, 0.22, 0.48}; dtype in {float16, bfloat16}; scale in {scale0, scale_l1}; progress 51% -> 52%]
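Every case above is skipped by the same hardware guard: PyTorch's FlashAttention-backed scaled_dot_product_attention kernels require an SM80-class (Ampere) or newer NVIDIA GPU, and this runner's GPU predates SM80. A minimal sketch of such a skip guard, using hypothetical helper names rather than the test suite's actual decorators, could look like this:

    # Minimal sketch (hypothetical helper, not the suite's real decorator) of the
    # capability check implied by the skip reason in the log above.
    import pytest
    import torch

    def _supports_flash_sdpa() -> bool:
        """True only on a CUDA device with compute capability SM80 (Ampere) or newer."""
        if not torch.cuda.is_available():
            return False
        major, _minor = torch.cuda.get_device_capability()
        return major >= 8

    @pytest.mark.skipif(
        not _supports_flash_sdpa(),
        reason="Does not support SDPA or pre-SM80 hardware",
    )
    def test_flash_attention_vs_math_ref_grads():
        ...  # compare flash-attention gradients against the math reference here

torch.cuda.get_device_capability() returns a (major, minor) compute-capability tuple, so major >= 8 is the SM80+ check; PyTorch's own test suite gates these tests with its internal platform predicates, and the names above are illustrative only.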
2024-08-07T18:08:33.6020939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s]
(Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6023731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6026527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6029185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6031860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6034597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6037291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6039942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6042605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6045281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6047957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6050645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6053338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6055996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6058754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6061531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6064185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6066859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6069553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6072251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6074894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6077600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6080299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6082967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6085646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6088320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6090998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6093734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6096786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6099463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6102151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6104860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6107497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6110184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6112868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6115539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6118243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6120919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6123622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6126276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6129110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6131939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6134631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6137314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6139995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6142665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6145356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6148061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6150768Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6153446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6156123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6158817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6161471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6164224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6167018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6169688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6172329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6175013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6177731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6180399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 
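[editor's note] Each SKIPPED entry above is one pytest parametrization of test_flash_attention_vs_math_ref_grads: the suffix of the test name encodes the cross-product of batch size, query/key sequence lengths, head dimension, causal masking, dropout probability, dtype, and softmax scale. A minimal sketch of how such a matrix expands (names and values below are inferred from the log, not taken from PyTorch's own suite, which uses its internal parametrize helpers):

    # Sketch only: illustrative reconstruction of the parameter matrix
    # visible in the test names above.
    import itertools
    import pytest
    import torch

    HEAD_DIMS = [8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203]
    DROPOUT_PS = [0.0, 0.22, 0.48]
    DTYPES = [torch.float16, torch.bfloat16]
    SCALES = [None, "l1"]  # "scale0" / "scale_l1" in the test names

    MATRIX = list(itertools.product(HEAD_DIMS, [False, True],
                                    DROPOUT_PS, DTYPES, SCALES))

    @pytest.mark.parametrize("head_dim,is_causal,dropout_p,dtype,scale", MATRIX)
    def test_flash_attention_vs_math_ref_grads_sketch(head_dim, is_causal,
                                                      dropout_p, dtype, scale):
        ...  # one log line per combination: 11 * 2 * 3 * 2 * 2 = 264 cases
             # for each (batch_size, seq_len_q, seq_len_k) triple

This explains why the log emits hundreds of near-identical lines per (batch_size, seq_len_q, seq_len_k) triple while the progress counter stays pinned at [ 52%].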
2024-08-07T18:08:33.6183081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6185766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6188460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6191126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6193825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6196786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6199601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6202487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6205177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6207835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6210538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6213227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 52%] 2024-08-07T18:08:33.6215896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6218616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6221326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6224054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6226714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6229397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6232082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6234831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6237582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6240256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6242950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6245617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6248281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6250975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6253668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6256349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6259019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6261702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6264408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6267097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6269853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6272627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6275298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6278026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6280716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6283396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6286076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6288743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6291391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6294054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6297046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6299737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6302410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6305188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6307971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6310638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6313300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6315983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6318730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6321393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6324072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6326729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6329406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6332092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6334768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6337436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6340206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6342963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6345681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6348342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6351055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6353718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6356355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6358996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6361677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6364337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6366992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6369697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6372370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6375086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6377849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6380515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6383202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6385890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6388584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6391262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6393940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6396936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6399616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6402305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6405020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6407724Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6410504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6413304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6415979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6418686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6421352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6424059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6426747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6429422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6432095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6434758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 2024-08-07T18:08:33.6437443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%] 
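[editor's note] What each of these cases would check if it ran: the same scaled_dot_product_attention call is executed once with the flash-attention backend and once with the math (reference) backend, and both the outputs and the input gradients are compared. A minimal sketch under stated assumptions (the sdpa_kernel context manager and SDPBackend enum exist in recent PyTorch; the real test also reconstructs dropout masks for dropout_p > 0, so dropout is fixed at 0.0 here to keep the comparison deterministic, and the tolerances are illustrative):

    # Sketch of the flash-vs-math-ref gradient comparison, not PyTorch's
    # actual test body.
    import torch
    import torch.nn.functional as F
    from torch.nn.attention import sdpa_kernel, SDPBackend

    def flash_vs_math_ref_grads(batch=8, heads=4, seq_q=2048, seq_k=1024,
                                head_dim=64, is_causal=True,
                                dtype=torch.float16):
        q = torch.randn(batch, heads, seq_q, head_dim, device="cuda",
                        dtype=dtype, requires_grad=True)
        k = torch.randn(batch, heads, seq_k, head_dim, device="cuda",
                        dtype=dtype, requires_grad=True)
        v = torch.randn_like(k).requires_grad_(True)

        # Force the flash-attention kernel; raises on unsupported hardware.
        with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
            out_flash = F.scaled_dot_product_attention(q, k, v,
                                                       is_causal=is_causal)
        grads_flash = torch.autograd.grad(out_flash.sum(), (q, k, v))

        # Recompute with the plain math reference implementation.
        with sdpa_kernel(SDPBackend.MATH):
            out_math = F.scaled_dot_product_attention(q, k, v,
                                                      is_causal=is_causal)
        grads_math = torch.autograd.grad(out_math.sum(), (q, k, v))

        torch.testing.assert_close(out_flash, out_math, atol=2e-3, rtol=2e-3)
        for gf, gm in zip(grads_flash, grads_math):
            torch.testing.assert_close(gf, gm, atol=2e-3, rtol=2e-3)

On a GPU without flash-attention support, forcing SDPBackend.FLASH_ATTENTION raises rather than silently falling back, which is exactly why the whole family is skipped up front on this runner.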
2024-08-07T18:08:33.6440122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 52%]
[… ~240 further test_flash_attention_vs_math_ref_grads parameterizations (timestamps 18:08:33.644–18:08:33.711), all SKIPPED [0.0002s–0.0003s] with the same reason "(Does not support SDPA or pre-SM80 hardware)": batch_size 8, seq_len_q 2048, seq_len_k ∈ {128, 2048}, head_dim ∈ {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal ∈ {False, True}, dropout_p ∈ {0.0, 0.22, 0.48}, dtype ∈ {bfloat16, float16}, scale ∈ {scale0, scale_l1}; progress [ 52%] → [ 53%] …]
2024-08-07T18:08:33.7114066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%]
2024-08-07T18:08:33.7116976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7119760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7122424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7125110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7127801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7130499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7133174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7135882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7138542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7141206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7143900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7146568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 53%] 2024-08-07T18:08:33.7149280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7152059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7154796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7157433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7160124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7162860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7165547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7168229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7170930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7173673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7176371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7179065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7181761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7184521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7187309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7190051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7192736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7195675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7198382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7199794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7201249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7202691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7204144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7205572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7207025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7208457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7209968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7211518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7213044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7214497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7216015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7217485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7218940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7220380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7221816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7223258Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7224717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7226180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7227601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7229092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7230624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7232122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7233529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7234987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7236424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7237873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7239292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 
2024-08-07T18:08:33.7240741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7242169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7243605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7245069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7246489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7247971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7249493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7250978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7252399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7253853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7255307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7256738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 53%] 2024-08-07T18:08:33.7258157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7259616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7261049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7262489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7263927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7265353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7266881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7268384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7269862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7271287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7272739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7274170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7275603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7277028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7278481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7279888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7281326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7282763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7284238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7285702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7287218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7288725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7290171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7291621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] 
(Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7293057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7294556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7296244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7297717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7299140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7300587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7302024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7303452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7304981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7306537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7308054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7309475Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7310922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7312353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7313805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7315244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7316706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7318163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7319619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7321038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7322469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7323935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7325495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 
2024-08-07T18:08:33.7326955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7328372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7329827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7331262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7332694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7334114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7335565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7336988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7338418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7339832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7341273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7342774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 53%] 2024-08-07T18:08:33.7344301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7345711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7347157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7348588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7350007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7351432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7352861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7354332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7355746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7357183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7358619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7360068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7361557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7363151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7364600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7366055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7367479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7368949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7370349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7371775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7373213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7374685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7376134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%] 2024-08-07T18:08:33.7377571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 53%]
2024-08-07T18:08:33.7378999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 53%]
[... every intervening parametrization of test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads was likewise SKIPPED in 0.0002s-0.0003s with the same reason "(Does not support SDPA or pre-SM80 hardware)": batch_size_8 and seq_len_q_2048 throughout; seq_len_k_2048 with head_dim 64, 72, 8, 96, then seq_len_k_256 with head_dim 128, 160, 16, 192, 203, 21, 256; is_causal False and True; dropout_p 0.0, 0.22, 0.48; bfloat16 and float16; scale0 and scale_l1; progress advanced from [ 53%] to [ 54%] ...]
2024-08-07T18:08:33.7737234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%]
2024-08-07T18:08:33.7738672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%]
2024-08-07T18:08:33.7740120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
54%] 2024-08-07T18:08:33.7741638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7743106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7744548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7745984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7747434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7748851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7750300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7751722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7753137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7754572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7756017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7757429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 54%] 2024-08-07T18:08:33.7758884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7760411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7761886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7763304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0004s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7764720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7766262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7767687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7769119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7770539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7771964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7773407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7774831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
54%] 2024-08-07T18:08:33.7776275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7777744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7779285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7780746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7782180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7783605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7785140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7786547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7787983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7789401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7790835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7792239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) 
[ 54%] 2024-08-07T18:08:33.7793648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7795325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7796840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7798395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7799883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7801319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7802745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7804181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7805597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7807041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7808468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7809892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
54%] 2024-08-07T18:08:33.7811304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7812751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7814184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7815657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7817218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7818698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7820149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7821564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7823010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7824418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7825883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7827293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 54%] 2024-08-07T18:08:33.7828710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7830122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7831568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7832976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7834428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7835981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7837453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7838878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7840288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7841733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7843147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7844568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.7846008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7847449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7848890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7850304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7851715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7853145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7854704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7856268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7857694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7859110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7860555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7861942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.7863366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7864789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7866244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7867639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7869078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7870487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7871917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7873411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7874903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7876355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7877782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7879216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.7880627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7882070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7883503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7884934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7886366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7887809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7889242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7890672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7892158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7894303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7895959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7897386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
54%] 2024-08-07T18:08:33.7898820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7900237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7901677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7903087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7904522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7905946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7907392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7908798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7910231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7911770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7913321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7914727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.7916175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7917647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7919083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7920505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7921925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7923371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7924796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7926256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7927679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7929106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7930596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7932102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.7933503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7934941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7936396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7937797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7939229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7940645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7942087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7943487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7944917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7946365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7947806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7949291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.7958907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7960454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7961926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7963351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7964811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7966237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7967683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7969101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7970533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7971945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7973375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7974817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.7976351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7977885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7979301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7980721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7982133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7983571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7985012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7986440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7987850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7989274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7990696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7992124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.7993520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7995372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7997020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7998427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.7999857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8001276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8002714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8004113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8005573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8006973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8008396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8009789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8011210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8012616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8014083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8015618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8017096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8018565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8019988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8021419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8022809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8024253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8025701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8027119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8028525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8029969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8031395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8032861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8034367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8035853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8037291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8038705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8040143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8041557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8042996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8044403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8045826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8047241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8048685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8050086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8051556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8053056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8054546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8055966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8057378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8058822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8060247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8061676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8063096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8064545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8065976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8067407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8068836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8070311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8071831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8073351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8074780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8076204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8077648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8079042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8080464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8081886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8083322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8084751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8086232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8087652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8089086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8090566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8092106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8093514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8094949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8096648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8098053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8099495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8100925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8102345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8103757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8105222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8106645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8108056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8109575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8111126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8112535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8113946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8115376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8116782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8118256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8119667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8121094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8122505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8123948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8125375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8126813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8128284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8129818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8131209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8132636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8134062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8135508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8136935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8138353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8139796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8141218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8142644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8144066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8145520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8147020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8148531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8149930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8151370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8152794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8154210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8155640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8157056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8158493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8159890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8161316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8162739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8164163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8165702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8167220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8168630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8170070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8171476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8172906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8174324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8175790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8177193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8178595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8180019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8181439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8182841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8184309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8185844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8187252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8188667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8190074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8191502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8192908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8194323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8195995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8197416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8198856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8200263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8201683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8203166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8204725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8206218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8207643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8209064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8210510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8211928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8213341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8214765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8216211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8217632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8219044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8220465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8221914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8223424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8224900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8226325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8227740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8229161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8230559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8231992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8233415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8234839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8236250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8237672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8239106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8240566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8242084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8243550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8245002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8246409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8247846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8249243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8250671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8252071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8253482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8254899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8256315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8257726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8259123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8260682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8262181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8263592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8265012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8266448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8267854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8269272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%]
2024-08-07T18:08:33.8270680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8272106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8273517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8274962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8276368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8277770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8279275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8280763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8282178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8283577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8285073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8286460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%]
2024-08-07T18:08:33.8287870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8289274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8290701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8292085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8293504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8294936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8296846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8298402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8299934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8301361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8302771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8304201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%]
2024-08-07T18:08:33.8305623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8307055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8308479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8309901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8311301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8312734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8314189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8315632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8317115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8318651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8320079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8321468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%]
2024-08-07T18:08:33.8322888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8324302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8325755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8327152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8328578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8329983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8331410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8332810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8334224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8335751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8337275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8338705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8340135Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8341585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8343040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8344461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8345906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8347359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8348785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8350221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8351648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8353080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8354574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8356101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 
2024-08-07T18:08:33.8357507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8358931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8360385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8361792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8363220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8364641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8366110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8367520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 54%] 2024-08-07T18:08:33.8368961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8370398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8371845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8373336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 55%] 2024-08-07T18:08:33.8374870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8376296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8377747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8379170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8380588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8382034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8383468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8384925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8386340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8387777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8389210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8390630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8392109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8393641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8395411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8396866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8398283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8399730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8401157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8402572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8404012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8405454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8406899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8408313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8409738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8411274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8412835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8414239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8415694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8417120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8418615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8420023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8421442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8422879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8424290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8425735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does 
not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8427149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8428578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8430075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8431590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8432995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8434461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8435902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8437335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8438740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8440188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8441626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8443039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does 
not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8444478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8445939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8447386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8448891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8450416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8451839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8453308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8454711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8456167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8457581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8459023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8460443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8461882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8463311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8464761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8466190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8467718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8469244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8470672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8472100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8473520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8474981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8476410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8477847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8479266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8480720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8482147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8483591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8485036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8486531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8488070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8489482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8490922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8492353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8493797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8495498Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8496956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8498388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8499831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8501241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8502680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8504099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8505672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8507216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8508627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8510063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8511493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 
2024-08-07T18:08:33.8512925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8514328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8515792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8517223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8518691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8520104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8521548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8522976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8524486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8526006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8527412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8528842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
55%] 2024-08-07T18:08:33.8530240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8531658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8533066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8534504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8535938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8537362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8538777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8540221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8541619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8543084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8544605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8546122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 
55%] 2024-08-07T18:08:33.8547538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8548961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8550411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8551843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8553291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8554717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8556165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8557602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8559043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8560455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8561937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8563447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 55%]
2024-08-07T18:08:33.8564934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%]
[... several hundred further test_flash_attention_vs_math_ref_grads parametrizations (batch_size 8; seq_len_q 2048; seq_len_k 587 and 64; head_dim 8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256; is_causal True/False; dropout_p 0.0, 0.22, 0.48; float16/bfloat16; scale0/scale_l1), each SKIPPED in 0.0002-0.0003s with the same reason, "Does not support SDPA or pre-SM80 hardware", all at [ 55%] ...]
2024-08-07T18:08:33.8929377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or
pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8930794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8932201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8933664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8935082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8936508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8937922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8939355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8940851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8942379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8943806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8945213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8946651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 55%] 2024-08-07T18:08:33.8948053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8949469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8950883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8952323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8953736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8955159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8956581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8958030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8959476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8960988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8962460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8963909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
55%] 2024-08-07T18:08:33.8965344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8966775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8968218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8969648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8971085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8972495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8973952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8975389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8976818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8978279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8979804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8981271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 55%] 2024-08-07T18:08:33.8982707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8984133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8985561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8987002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8988411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8989848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8991260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8992715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8994129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8995870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8997372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.8998932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 
hardware) [ 55%] 2024-08-07T18:08:33.9000404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9001824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9003257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9004707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9006110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9007524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9008961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9010384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9011819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9013257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9014682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9016133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 55%] 2024-08-07T18:08:33.9017697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9019155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9020580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9021996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9023443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9024845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 55%] 2024-08-07T18:08:33.9026252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9027689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9029080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9030498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9031913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9033369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9034814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9036333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9037802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9039244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9040653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9042089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9043530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9044970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9046378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9047785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9049213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9050634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9052044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9053508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9055082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9056555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9057964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9059371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9060807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9062215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9063655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9065066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9066482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9067919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9069322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9070745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9072155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9073690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9075180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9076600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9078015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9079464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9080872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9082302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9083747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9085160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9086554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9087970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9089393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9090803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9092301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9093813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9095489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9096919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9098348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9099746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9101175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9102672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9104104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9105504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9106924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9108365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9109768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9111314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9112845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9114281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9115685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9117119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9118552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9119980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9121376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9122805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9124212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9125640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9127039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9128432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9129938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9131440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9132855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9134255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9135697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9137117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9138537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9139948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9141388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9142815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9144239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9145654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9147069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9148585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9150097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9151526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9152954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9154398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9155792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9157215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9158631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9160072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9161464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9162903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9164318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9165757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9167225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9168714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9170152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9171571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9173018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9174433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9175871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9177295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9178734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9180144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9181585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9183039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9184463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9185955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9187473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9188881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9190280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9191711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9193153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9194588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9196283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9197744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9199156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9200592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9202001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9203448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9204939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9206502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9207976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9209392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9210842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9212273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9213720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9215138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9216592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9218060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9219498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9220917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9222345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9223828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9225356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9226807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9228242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9229669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9231081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9232501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9233956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9235390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9236793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9238240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9239656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9241091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9242530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9244054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9245519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9246956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9248362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9249777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9251206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9252625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9254068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9255472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9256898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9258315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9259733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9261177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9262718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9264165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9265571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9266973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9268406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9269819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9271217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9272656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9274075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9275515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9276925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9278356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9279775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9281301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9282809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9284247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9285672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9287125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9288528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9289942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9291382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9292808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9294224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9295858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9297312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9298731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9300286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9301812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9303261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9304683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9306111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9307553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9308994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9310419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9311824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9313277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9314705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9316149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9317589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9319099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9320608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9322044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9323480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9324918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9326330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9327764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9329161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9330586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9332003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9333447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9334877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9336285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9337792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9339293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9340711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9342122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9343585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9345000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9346415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9347824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9349268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9350719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9352125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9353577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9354991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9356542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9358032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9359458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9360861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9362298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9363709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9365128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9366540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9367970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9369365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9370785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9372202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9373639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9375146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9376635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9378069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9379494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9380922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9382333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9383785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9385212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9386650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9388056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9389500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9390930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9392333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9393859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9395623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9397069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9398468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9399898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9401318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9402754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9404171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9405613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9407024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9408458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9409862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9411280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9412776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9414316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9415803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9417204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9418678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9420104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9421524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9422943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9424393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9425805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9427225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9428636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9430057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9431511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9433002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9434485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9435915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9437344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9438746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9440161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9441575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9443030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9444424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9445840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9447255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9448687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9450127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9451693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9453169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9454592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9456016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9457434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9458862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9460279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9461704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9463125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9464548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9465989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9467399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9468806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9470312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9471815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9473231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9474656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9476075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9477501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9478898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9480324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9481736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9483193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9484597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9486019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9487432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9488942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9490434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9491860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9493299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9494728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9496404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9497829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9499266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9500675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9502086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9503506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9504937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9506345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9507887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9509430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9510851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9512262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9513689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9515107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9516516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9517992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9519395Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9520812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9522223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9523696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9525089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9526587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9528083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9529505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9530904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9532327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9533754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9535156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9536557Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9537960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9539375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9540784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9542204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9543621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9545117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9546616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9548018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9549407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9550840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9552257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9553672Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9555094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9556517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9557950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9559354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9560784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9562190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9563693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9565181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9566660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9568064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9569504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9570904Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9572315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9573742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9575163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9576565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9577966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9579396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9580802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9582258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9583779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_2048_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9585274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9586701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9588135Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9589563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9591009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9592442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9593880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9595566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9597023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9598489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9599937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9601456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9602995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9604507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 
2024-08-07T18:08:33.9605908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9607337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9608761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9610205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9611618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9613068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9614488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9615919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9617354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9618805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9620299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9621828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 56%] 2024-08-07T18:08:33.9623320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9624735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9626185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9627626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9629059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9630487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9631942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9633398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9634832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9636258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9637677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9639158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9640655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9642128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9643571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9645019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9646437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9647870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9649294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9650743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9652147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9653606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9655028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9656479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9657989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9659493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9660979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9662398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9663847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9665272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9666706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9668131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9669571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9670980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9672412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9673849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9675322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9676765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9678271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9679758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 56%] 2024-08-07T18:08:33.9681161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9682588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9684029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9685467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9686869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9688301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9689720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9691168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9692585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9694037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9695810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9697416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9698900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9700320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9701772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9703225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9704660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9706077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9707520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9708938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9710362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9711775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9713222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9714688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9716209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9717725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9719167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9720597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9722035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9723466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9724895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9726354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9727766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9729201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9730635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9732113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9733572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9735108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9736593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9738042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9739465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9740894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9742354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9743780Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9745213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9746620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9748067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9749501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9750931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9752412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9753947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9755419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9756845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9758259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:33.9759698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 
2024-08-07T18:08:33.9761113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9762543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9763978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9765396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9766838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9768258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9769696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9771153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9772705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9774162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9775591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9776999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9778438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9779833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9781256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9782694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9784110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9785533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9786957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9788379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9789794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9791287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9792819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9794254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9795970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9797472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9798884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9800354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9801813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9803248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9804664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9806099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9807554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9809050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9810613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9812713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9814157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9815625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9817069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9818534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9819995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9821421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9822872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9824299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9825730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9827159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9828580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9830091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9831624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9833064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9834481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9835931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9837357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9838784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9840217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9841647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9843098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9844533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9846009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9847425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9848938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9850429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9851856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9853297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9854750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9856155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9857583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9859018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9860450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9861857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9863314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9864734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9866177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9867677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9869180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9870616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9872049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9873508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9874918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9876359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9877797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9879221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9880634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9882069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9883502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9884926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9886409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9887914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9889347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9890756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9892193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9893620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9895333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9896764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9898192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9899613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9901062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9902478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9903908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9905450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9907009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9908413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9909828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9911273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9912711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9914141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9915560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9917002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9918462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9919888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9921297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9922749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9924253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9925764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9927170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9928606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9930035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9931437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9932889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9934311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9935751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9937151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9938583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9940002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9941437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9942954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9944507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9945925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9947371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9948784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9950192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9951617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9953064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9954492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9955888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9957319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9958741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9960156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9961659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9963209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9964640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9966056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9967468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9968905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9970326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9971739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9973212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9974636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9976086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9977504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9978938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9980435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9981969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9983392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9984830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9986250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9987690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9989089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9990516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9991949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9993376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9994801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9996484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9997926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:33.9999462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0001020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0002465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0003913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0005344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0006767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0008177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0009623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0011044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0012474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0013906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0015331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0016774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0018299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0019822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0021264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0022709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0024111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0025537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0026949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0028389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0029786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0031204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0032641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0034072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0035484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0036959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0038484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0039906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0041323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0042768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0044199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0045618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0047050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0048468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0049904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0051327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0052786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0054197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0055699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0057231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0058630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0060050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0061469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0062937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0064330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0065757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0067186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0068610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0070014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0071447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0072876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0074368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0075877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0077279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0078711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0080136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0081560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0082985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0084418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0085843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0087261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0088669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0090107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0091501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0092975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0094463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0096219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0097675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0099085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0100505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0101910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0103363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0104769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0106202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%]
2024-08-07T18:08:34.0107608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 57%] 2024-08-07T18:08:34.0109060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0110462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0111973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0113525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0115042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0116449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0117908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0119356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0120781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0122228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0123649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0125076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0126489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0127912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0129308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0130780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0132300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0133778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0135181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0136591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0138034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0139429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0140852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0142308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or 
pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0143736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0145145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0146584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0147999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0149484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0150993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0152498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0153919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0155366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0156784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0158195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0159630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0161055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0162492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0163959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0165454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0166862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0168321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0169815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0171315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0172756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0174185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0175638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0177064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA 
or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0178481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0179886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0181303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0182742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0184188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0185589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0187054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0188558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0190049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0191445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0192903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0194304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0195972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0197379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0198785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0200209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0201621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0203063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0204508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0206010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0207547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0209026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0210424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0211861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 57%] 2024-08-07T18:08:34.0213314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0214735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0216145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0217595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0219062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0220474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0221910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0223346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0224788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0226284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0227803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0229216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0230652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0232056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0233481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0234897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0236344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0237743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0239168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0240593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0242017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0243451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0244919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0246483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0247888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0249308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0250718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0252162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0253584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0255017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0256411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0257840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0259261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0260668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0262111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0263590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0265100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0266487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0267910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0269325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0270753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0272172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0273597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0275074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0276518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0277917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0279343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0280744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 57%] 2024-08-07T18:08:34.0282262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0283782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0285183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0286628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0288048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0289474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0290877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0292350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0293766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0295422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0296849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0298282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 57%] 2024-08-07T18:08:34.0299683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0301197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0302752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0304161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0305592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0306997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0308411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0309814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0311247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0312666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0314075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0315490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 
2024-08-07T18:08:34.0316923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0318356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0319857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0321371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0322798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0324218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0325633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0327061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0328475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0329908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0331319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0332752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 
2024-08-07T18:08:34.0334162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0335578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 57%] 2024-08-07T18:08:34.0336970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0338471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0339975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0341386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0342804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0344214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0345648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0347044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0348465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0349876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 
2024-08-07T18:08:34.0351300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0352710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0354135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0355539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0357042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0358547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0359968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0361378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0362805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0364224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0365675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0367101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 
2024-08-07T18:08:34.0368513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0369922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0371319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0372758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0374154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0375604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0377086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0378567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0379975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0381412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0382819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0384234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0385672Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0387079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0388499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0389915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0391371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0392769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0394228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0396035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0397625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0399016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0400443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0401878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0403288Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0404705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0406113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0407533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0408947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0410367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0411784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0413265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0414791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0416255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0417651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0419139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0420581Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0422037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0423452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0424888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0426341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0427758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0429206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0430630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0432156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0433680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0435164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0436575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 
2024-08-07T18:08:34.0438021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0439436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0440861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0442309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0443765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0445170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0446586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0448036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0449460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0450925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0452451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0453944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 58%] 2024-08-07T18:08:34.0455375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0456812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0458234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0459683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0461116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0462576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0463991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0465419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0466882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0468295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0469777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0471286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0472786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0474196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0475652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0477088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0478535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0479948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0481400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0482832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0484275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0485691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0487102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0488586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support 
SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0490105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0491592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0493010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0494452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0496168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0497610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0499026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0500478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0501924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0503354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0504773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0506201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0507706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0509233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0510726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0512162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0513609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0515018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0516440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0517889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0519343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0520740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0522190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0523618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not 
support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0525067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0526524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0528050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0529527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0530959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0532420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0533895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0535341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0536771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0538217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0539782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0541263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0542717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0544143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0545597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0547159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0548634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0550041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0551475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0552923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0554367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0555777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0557218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0558638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0560088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0561505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0562961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0564429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0565970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0567456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0568888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0570319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0571768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0573202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0574624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0576072Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0577493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0578917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0580333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0581798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0583269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0584786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0586253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0587690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0589122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0590559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0591991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 
2024-08-07T18:08:34.0593417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0594865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0596765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0598218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0599650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0601098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0602615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0604177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0605666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0607113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0608533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0609981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
58%] 2024-08-07T18:08:34.0611393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0612861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0614273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0615683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0617120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0618596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0620024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0621478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0623039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0624503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0625923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0627339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 
2024-08-07T18:08:34.0628791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0630220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0631656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0633101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0634547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0635991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0637416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0638866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0640344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0641905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0643397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0644840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 58%] 2024-08-07T18:08:34.0646266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0647717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0649122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0650552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0651981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0653430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0654847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0656327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0657764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0659236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0660757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0662234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0663679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0665100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0666537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0667946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0669395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0670825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0672269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0673686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0675134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0676563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0678017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0679543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0681006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0682456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0683862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0685295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0686708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0688153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0689562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0690987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0692426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0693905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0695568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0697030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0698625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0700170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0701593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0703045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0704486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0705914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0707343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0708765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0710205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0711634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0713091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0714510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0715941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0717463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0718969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0720397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0721830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0723290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0724692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0726123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0727546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0728993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0730391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0731822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0733265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0734705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0736181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0737695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0739119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0740551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0741987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0743418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0744865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0746303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0747733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0749151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0750588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0752002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0753428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0754912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0756437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0757852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0759279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0760702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0762136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0763580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0764983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0766427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0767827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0769270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0770672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0772104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0774142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0775695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0777096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0778523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0779948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0781367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0782810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0784226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0785657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0787076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0788487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0789894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0791318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0799005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0800692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0802118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0803567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0805003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0806420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0807821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 58%] 2024-08-07T18:08:34.0809292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0810725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0812144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0813563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0815011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0816433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0817981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0819545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0820972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0822419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0823853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0825264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 58%] 2024-08-07T18:08:34.0826670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0828110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0829546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0830964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0832378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0833820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0835217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0836699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0838227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0839678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0841100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0842517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 58%] 2024-08-07T18:08:34.0843973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0845393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0846860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0848269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0849724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0851152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0852584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0853992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0855538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0857073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0858476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0859921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0861337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0862767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0864162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0865588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0867005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0868436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0869848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0871282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0872688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0874234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0875730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0877146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0878582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0880014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0881436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0882846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0884284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0885714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0887142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0888568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0890019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0891440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0892940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0894471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0896235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0897672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0899111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0900523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0901936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0903377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0904801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0907524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0910198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0912902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0915574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0918414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0921198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0923871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0926514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0930606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0933296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0936020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0938676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0941310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0943962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0946877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0950697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0953347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0956098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0958857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0961491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0964120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0968163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0971064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0973718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0976342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0979028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0981685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0984342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0987420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 58%] 2024-08-07T18:08:34.0991010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0993770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0996967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.0999718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1002390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1005078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1008025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1011120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1013806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1016508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1019225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1021893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 58%] 2024-08-07T18:08:34.1024564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1027242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1029982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1032740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1035470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1038141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1040803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1043453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1046084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1048813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1051481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1054148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 58%] 2024-08-07T18:08:34.1056816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1059495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1062130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1064820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1067612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1070329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1072990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1075651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1078338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1081018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1083695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1086358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 58%] 2024-08-07T18:08:34.1089010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1091673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1094326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1097218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1099962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1102780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1105508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1108162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1110822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1113486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1116139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1118834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1121517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1124189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1126846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1129481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1132131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1134871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1137617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1140342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1143001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1145665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1148316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1150962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1153621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1156284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1158939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1161589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1164219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1166881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1169561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1172287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1174985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1177658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1180301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1182966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) 
[ 59%] 2024-08-07T18:08:34.1185599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1188273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1190953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1193599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1196568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1199281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1201937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1204670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1207484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1210240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1212907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1215587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 59%] 2024-08-07T18:08:34.1218305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1220940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1223595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1226272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1228923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1231600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1234293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1236933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1239572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1242334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1245097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1247731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 59%] 2024-08-07T18:08:34.1250391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1253083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1255725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1258362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1261027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1263687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1266337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1268973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1271645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1274303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1277030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1279791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 59%] 2024-08-07T18:08:34.1282436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1285069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1287713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1290356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1292992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1295894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1298592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1301229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1303866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1306567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1309212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1311960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 
2024-08-07T18:08:34.1314728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1317384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1320067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1322716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1325379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1328045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1330701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1333376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1336012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1338684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1341339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1343994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 
2024-08-07T18:08:34.1346726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1349524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1352139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1354768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1357415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1360090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1362751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1365399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1368036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1370669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1373310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1375946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 
2024-08-07T18:08:34.1378591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1381355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1384095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1386743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1389402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1392080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1394735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1397719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1400415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1403064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1405719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1408363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 
2024-08-07T18:08:34.1411018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1413694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1416460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1419271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1421922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1424584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1427230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1429869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1432524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1435174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1437843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1440505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 
2024-08-07T18:08:34.1443148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1445818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1448460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1451171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1453889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1456570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1459230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1461875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1464529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1467214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1469844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1472491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 
2024-08-07T18:08:34.1475151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1477793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1480431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1483045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1485763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1488515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1491140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1493804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1496712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1499361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1501967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1504591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1507247Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1509900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1512564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1515215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1517843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1520665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1523444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1526160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1528812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1531500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1534153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1536773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1539433Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1542087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1544718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1547369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1550033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1552667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1555363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1558089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1560782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1563440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1566091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1568730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1571371Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1574054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1576706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1579342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1581995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1584660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1587315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1590000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1592765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1595758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1598463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1601134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1603775Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1606419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1609059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1614045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1620596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1625171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1627822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1630511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1633195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1635892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1638532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1641181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1643835Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1646504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1649135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1651774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1654430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1657228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1660010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1662749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1665405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1668089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1670816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1673467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1676118Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1678775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1681437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1684060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1686726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1689400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1692117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1694874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1697925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1700574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1703211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1705867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1708515Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1711171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1713804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1716442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1719104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1721764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1724426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1727163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1729941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1732666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1735295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1737928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1740555Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1743237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1745862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1748474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1751075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1753713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1756344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1758961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1761673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1764418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1767077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1769716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1772359Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1775022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1777664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1780323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1783007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1785676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1788327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1790981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1793683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1796751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1799556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1802271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1804921Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1807582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1810222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1812843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1815490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1818154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1820841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1823556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1826202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1828856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1831558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1834329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1837039Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1839699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1842358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1845001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1847658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1850319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1852985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1855652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1858324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1860999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1863651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1866333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1869135Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1871862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1874510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1877161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1879810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1882446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1885107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1887795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1890432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1893110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1896018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1898719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1901348Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1904161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1906966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1909659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1912340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1914999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1917624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1920301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1922960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1925622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1928261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1930924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1933541Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1936178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1938867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1941592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1944264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1946903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1949541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1952151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1954774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1957413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1960037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1962647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1965289Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1967972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1970612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1973291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1976031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1978746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1981374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1984013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1986675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1989334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1992027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1994664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.1997574Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2000207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2002849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2005492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2008198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2010994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2013694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2016302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2019015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2021679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2024323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2026952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2029595Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2032228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2034858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2037492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2040143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2042846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2045608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2048267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2050901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2053565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2056202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2058836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2061471Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2064092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2066714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2069330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2071978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2074628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2077310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2080049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2082707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2085345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2087974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2090597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2093225Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2096116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2098761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2101392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2104030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2106709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2109343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2112038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2114807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2117527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2120203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2122833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2125477Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2128106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2130702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2133351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2135986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2138626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2141263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2143901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2146505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2149190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2151955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2154672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2157305Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2159986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2162605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2165226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2167875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2170552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2173187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2175820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2178461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2181089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2183769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2186512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2189218Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2191853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2194481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2197416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2200053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2202701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2205325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2207957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2210577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2213220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2215845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2218605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2221408Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2224117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2226787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2229377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2232010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2234649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2237285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2239921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2242558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2245200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2247867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2250488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2253152Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2255878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2258587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2261178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2262626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2264035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2265437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2266829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2268251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2269656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2271065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2272499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2273917Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2275386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2276897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2278341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2279751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2281185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2282610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2284044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2285437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2286873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 59%] 2024-08-07T18:08:34.2288269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2289686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2291091Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2292537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2293919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2295655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2297193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2298684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2300072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2301463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2302912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2304319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2305729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2307120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2308561Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2310002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2311424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2312855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2314338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2315848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2317324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2318776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2320212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2321660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2323095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2324536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2325955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2327387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2328793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2330219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2331629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2333131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2334623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2336110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2337524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2338966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2340371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2341774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2343233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2344651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2346079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2347493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2348934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2350354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2351821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2353342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2355543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2356968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2358404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2359817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%]
2024-08-07T18:08:34.2361246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2362666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2364075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2365503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2366930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2368366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2369761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2371239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2372784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2374275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2375677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2377105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%]
2024-08-07T18:08:34.2378519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2379953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2381357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2382772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2384202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2385622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2387050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2388458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2389931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2391432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2392909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2394319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2396010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2397441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2398861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2400257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2401690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2403129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2404533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2405962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2407377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2408891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2410424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2411910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2413351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2414797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2416204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2417628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2419094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2420546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2421948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2423401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2424844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2426255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2427733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2429241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%]
2024-08-07T18:08:34.2430719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2432133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2433582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2434988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2436421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2437844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2439270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2440673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2442099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2443545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2444948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2446421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%]
2024-08-07T18:08:34.2447934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2449418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2450826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2452264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2453710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2455156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2456572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2458011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2459416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2460859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2462270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2463722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%]
2024-08-07T18:08:34.2465175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2466680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2468145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2469548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2470988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2472408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2473837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2475241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2476681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2478096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2479510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2480922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%]
2024-08-07T18:08:34.2482357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2483814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2485342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2486793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2488218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2489649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2491052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2492479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2493898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2495587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2497052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2498476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2499886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2501314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2502786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2504324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2505818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2507224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2508647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2510053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2511477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2512909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2514330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2515734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2517172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2518642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2520072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2521524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2523078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2524558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2525986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2527405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2528825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2530273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2531679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2533141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 
60%] 2024-08-07T18:08:34.2534559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2535992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2537404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2538820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2540280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2541811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2543287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2544710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2546124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2547570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2548967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2550371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2551805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2553248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2554664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2556073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2557559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2559034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2560545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2561998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2563454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2564871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2566296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2567716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2569108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2570542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2571936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2573371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2574787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2576221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2577715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2579225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2580678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2582103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2583508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2584956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2586370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2587803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2589203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2590610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2592040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2593547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2594962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2596701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2598253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2599734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2601154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2602557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2603988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2605462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2606927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2608333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2609749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2611182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2612601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2614030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2615434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2616904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2618394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2619925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2621337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2622782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2624189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2625634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2627051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2628490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2629900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2631312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2632763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2634177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2635638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2637137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2638614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2640026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2641462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2642879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2644307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2645723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2647141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2648543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2649973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2651390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2652803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2654262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2655766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2657247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2658635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2660056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2661465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2662919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2664318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2665739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2667140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2668573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2669975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2671375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 
2024-08-07T18:08:34.2672899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2674395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2675840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2677234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2678663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2680071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2681477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2682900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2684325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2685728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2687135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2688537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2689972Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2691425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2692948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2694422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2696077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2697537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2698949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2700372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2701783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2703248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2704648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2706069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2707476Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2708901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2710361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2711883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2713404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2714817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2716237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2717642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2719108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2720529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2721955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2723378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2724806Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2726230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2727652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2729097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2730623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2732090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2733516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2734948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2736365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2737802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2739204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2740633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2742041Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2743483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2744877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2746322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2747774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2749299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2750741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2752141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2753577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2754987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2756402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2757809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2759252Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2760670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2762104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2763515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2764949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2766410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2767937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2769387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2770817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2772251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2773666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2775092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2776496Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2777938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2779330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2780750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2782180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2783618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2785056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2786572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2788028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2789457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2790854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2792307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2793711Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2795383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2796863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2798266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2799691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2801107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2802549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2804025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2805590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2807073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2808478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2809878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2811306Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2812726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2814116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2815528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2816934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2818359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2819793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2821209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2822628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2824102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2825573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2827031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2828448Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2829890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2831293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2832741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2834165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2835582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2837001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2838422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2839847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2841260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2842773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2844275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2845747Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2847159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2848575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2849972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2851410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2852843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2854250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2855666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2857080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2858512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2859907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2861367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2862889Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2864370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2865768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2867196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2868615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2870056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2871467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2872922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2874336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2875760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2877195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2878606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2880074Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2881582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2883058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2884459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2885893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2887312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2888728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2890134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2891566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2892993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2894388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2896043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2897469Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2898969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2900485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2901986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2903399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2904833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2906230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2907651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2909063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2910495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2911903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2913334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2914738Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2916145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2917591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2919114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2920587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2921999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2923419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2924814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2926228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2927630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2929040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2930433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2931876Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2933298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2934701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2936162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2937665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2939152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2940554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 60%] 2024-08-07T18:08:34.2941997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2943415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2944849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2946247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2947678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2949084Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2950518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2951932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2953357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2954807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2956316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2957826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2959226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2960661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2962099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2963514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2964910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2966342Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2967749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2969159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2970561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2972010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2973468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2974970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2976440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2977844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2979276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2980678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2982124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2983525Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2984955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2986343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2987753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2989169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2990597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2992051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2993558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2995231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2996655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2998062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.2999462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3000889Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3002335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3003737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3005139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3006566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3007983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3009399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3010798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3012325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3013859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3015325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3016745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3018151Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3019613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3021009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3022452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3023860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3025289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3026685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3028106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3029503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3030970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3032472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3033934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3035342Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3036762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3038179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3039577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3041014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3042453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3043870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3045276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3046704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3048115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3049568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3051073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3052558Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3053964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3055358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3056769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3058177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3059611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3061004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3062432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3063830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3065260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3066649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3068099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3069591Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3071062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3072461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3073898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3075301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3076709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3078127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3079527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3080939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3082358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3083773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3085169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3086634Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3088115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3089563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3090947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3092374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3093778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3095409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3096874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3098281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3099700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3101084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3102504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3103910Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3105412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3106922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3108409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3109816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3111268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3112677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3114097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3115503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3116921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3118341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3119782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3121220Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3122629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3124086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3125572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3127035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3128438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3129845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3131251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3132677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3134088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3135481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3136896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3138307Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3139746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3141173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3142670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3144196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3145698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3147094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3148516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3149932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3151389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3152789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3154276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3155680Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3157088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3158502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3159907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3161339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3162805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3164302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3165747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3167168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3168581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3169990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3171409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3172845Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3174257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3175651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3177079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3178493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3179926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3181391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3182908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3184383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3185817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3187220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3188644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3190046Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3191501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3192891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3194300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3196017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3197452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3198870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3200333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3201905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3203383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3204791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3206190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3207616Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3209076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3210525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3211987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3213715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3215155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3216569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3217966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3219480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3220986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3222471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3223864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3225284Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3226686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3228064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3229466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3230872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3232322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3233701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3235118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3236514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3237971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3239443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3240897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3242329Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3243763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3245161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3246560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3247990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3249408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3250831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3252256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3253692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3255108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3256564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3258104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3259580Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3260980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3262411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3263816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3265241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3266650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3268052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3269477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3270894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3272340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3273733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3275184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3276682Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3278158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3279553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3280967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3282400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3283842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3285235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3286646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3288079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3289493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3290915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3292342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3293806Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3295651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3297160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3298551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3299977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3301393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3302819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3304218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3305626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3307054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3308445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3309868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3311273Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3312710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3314160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3316320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3317780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3319251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3320653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3322072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3323487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3324923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3326316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3327710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3329125Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3330526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3331922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3333351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3334878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3336328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3337737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3339131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3340549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3341956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3343372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3344757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3346159Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3347586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3348983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3350404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3351879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3353410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3354861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3356283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3357697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3359139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3360537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3361967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3363373Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3364791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3366180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3367581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3369005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3370458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3371973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3373421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3374851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3376287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3377681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3379075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3380490Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3381916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3383328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3384716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3386129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3387557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3388997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3390510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3391984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3393413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3394810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3396598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3398014Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3399432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3400820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3402252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3403654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3405083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3406474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3408031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3409569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3411045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3412467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3413860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3415285Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3416694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3418100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3419539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3420977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3422413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3424022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3425444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3426885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3428371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3429918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3431385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3432811Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3451215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3452686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3454103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3455516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3457419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3459004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3460448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3461854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3463262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3464784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3466295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3467928Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3469463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3471406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3472839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3474267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3475683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3477100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3478489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3480068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3481624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3483029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3485416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3486936Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3488420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3489806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3491220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3492649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3494073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3496064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3497509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3499600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3502132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3503746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3505190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3506711Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3508285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3509755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3511457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3513017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3515227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3517615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3519933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3521514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3522924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3524331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3525952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3527685Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3529657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3531483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3532949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3534346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3535762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3537153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3538566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3539981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3541407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3542792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3544195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3545634Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3547064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3548498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3550006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3551485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3552889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3554300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3555703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3557127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3558536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3559974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3561372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3562786Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3564192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3565597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3566981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3568441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3569976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3571412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3572821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3574228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3575657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3577045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3578467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3579904Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3581342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3582750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3584185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3585601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3587086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3588635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3590121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3591560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3592994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3594428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3596143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3597600Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3599014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3600451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3601851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3603290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3604707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3606224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3607752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3609252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3610687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3612094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3613521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3614937Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3616426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3617843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3619270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3620710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 61%] 2024-08-07T18:08:34.3622155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3623565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3625036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3626543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3628036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3629446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3630878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3632316Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3633731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3635155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3636561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3637996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3639422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3640856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3642265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3643741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3645271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3646741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3648145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3649587Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3651002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3652395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3653815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3655237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3656680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3658076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3659512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3660935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3662412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3663899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3665378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3666783Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3668220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3669624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3671019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3672439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3673857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3675287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3676685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3678111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3679544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3681004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3682492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3683977Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3685398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3686835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3688248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3689711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3691138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3692554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3693987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3695655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3697123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3698540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3700067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3701600Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3703103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3704501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3705921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3707343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3708789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3710222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3711653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3713077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3714493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3715959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3717376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3718878Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3720432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3722053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3723489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3724930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3726364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3727818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3729279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3730800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3732475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3733911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3735348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3736768Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3738250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3739744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3741238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3742646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3744080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3745487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3746911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3748322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3749761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3751236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3752666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3754079Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3755497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3756909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3758348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3759885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3761353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3762772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3764185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3765618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3767037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3768461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3769864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3771305Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3772720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3774113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3775523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3777040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3778630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3780090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3781521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3782937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3784378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3785762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3787191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3788613Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3790048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3791475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3792907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3794322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3796090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3797651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3799131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3800576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3802008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3803438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3804852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3806290Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3807703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3809109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3810518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3811963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3813379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3814825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3816388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3817865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3819302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3820727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3822150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3823561Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3824994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3826394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3827810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3829226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3830690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3832099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3833565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3835076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3836546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3837971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3839395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3840839Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3842256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3843674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3845074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3846556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3847963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3849379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3850799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3852271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3853783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3855236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3856656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3858069Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3859505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3860923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3862349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3863770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3865209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3866609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3868038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3869457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3870967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3872460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3873936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3875338Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3876752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3878163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3879554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3881004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3882418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3883837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3885232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3886664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3888077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3889581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3891101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3892582Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3893991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3895661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3897102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3898517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3899952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3901382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3902809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3904238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3905718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3907125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3908630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3910162Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3911694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3913308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3915718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3917881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3919846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3921942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3924141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3926070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3928457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3930240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3932693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3934914Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3937236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3939603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3941663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3943568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3945440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3947202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3949171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3950875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3952740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3954181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3955613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3957019Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3958438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3959880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3961388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3962840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3964290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3965688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3967105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3968506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3969944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3971335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3972725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3974174Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3975596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3977012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3978452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3979978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3981479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3982898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3984329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3985765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3987183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3988607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3990009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3991410Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3992838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3994251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3995970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3997483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.3999038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4000494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4001913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4003325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4004773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4006201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4007616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4009030Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4010466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4011858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4013268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4014706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4016221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4017732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4019188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4020619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4022037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4023475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4024872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4026290Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4027701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4029112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4030511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4031927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4033364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4034808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4036323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4037773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4039199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4040598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4042015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4043436Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4044871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4046273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4047693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4049112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4050556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4052009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4053475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4054996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4056461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4057871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4059278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4060709Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4062118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4063543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4064944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4066372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4067783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4069201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4070599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4072042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4073589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4075058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4076470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4077879Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4079309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4080704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4082117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4083552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4084989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4086384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4087805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4089216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4090643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4092108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4093631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4095368Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4096793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4098202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4099588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4101009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4102419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4103846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4105242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4106667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4108071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4109473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4110937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4112472Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4113997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4115391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4116856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4118279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4119719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4121130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4122598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4124027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4125459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4126861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4128288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4129732Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4131248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4132683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4134104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4135535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4136945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4138358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4139764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4141194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4142598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4144032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4145439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4146868Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4148323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4149847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4151294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4152704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4154171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4155578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4157001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4158412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4159855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4161259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4162681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4164113Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4165536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4166979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4168471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4169922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4171349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4172752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4174173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4175604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4177011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4178428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4179828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4181252Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4182664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4184088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4185521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4187040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4188514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4189921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4191319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4192735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4194190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4195847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4197289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4198689Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4200107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4201499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4202910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4204414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4205967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4207430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4208828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4210226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4211656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4213063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4214478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4215959Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4217380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4218795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4220205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4221643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4223053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4224526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4226039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4227581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4228997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4230424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4231831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4233231Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4234678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4236072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4237485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4238893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4240325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4241716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4243180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4244686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4246167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4247559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4248984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%] 2024-08-07T18:08:34.4250381Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 62%]
[... 250 further parameterized variants of test_flash_attention_vs_math_ref_grads, each SKIPPED in 0.0002-0.0003s with the same reason "(Does not support SDPA or pre-SM80 hardware)"; the sweep in this span covers batch_size 8, seq_len_q 4, seq_len_k 128 (head_dim 8/32/64/72/96) and seq_len_k 2048 (head_dim 16/21/128/160/192/203, with head_dim 21 appearing only as is_causal_False here), is_causal False/True, dropout_p 0.0/0.22/0.48, float16/bfloat16, scale0/scale_l1; progress advances from [ 62%] to [ 63%] ...]
2024-08-07T18:08:34.4612545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%]
2024-08-07T18:08:34.4613956Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4615389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4616843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4618249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4619679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4621134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4622639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4624088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4625535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4626947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4628366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4629769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4631209Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4632639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4634064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4635495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4636920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4638365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4639818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4641349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4642839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4644281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4645706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4647141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4648550Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4649987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4651434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4652870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4654278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4655716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4657138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4658598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4660119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4661592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4663009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4664417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4665870Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4667285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4668704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4670111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4671547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4672967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4674404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4675813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4677225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4678704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4680198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4681670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4683077Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4684523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4685916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4687330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4688747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4690180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4691574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4692998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4694423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4696143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4697646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4699172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4700668Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4702086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4703519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4704933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4706371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4707798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4709218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4710618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4712056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4713484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4714934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4716425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4717946Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4719424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4720821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4722245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4723657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4725113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4726514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4727936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4729339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4730769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4732178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4733589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4735066Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4736575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4738041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4739443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4740883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4742302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4743728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4745161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4746598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4748014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4749429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4750837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4752261Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4753712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4755241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4756684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4758087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4759522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4760915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4762326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4763732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4765195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4766591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4768012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4769413Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4770840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4772269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4773771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4775251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4776664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4778092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4779496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4780914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4782327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4783745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4785169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4786582Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4787986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4789396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4790821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4792397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4793854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4795459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4796888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4798299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4799730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4801118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4802536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4803945Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4805396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4806800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4808219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4809711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4811265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4812733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4814157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4815594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4817063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4818495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4819896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4821330Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4822746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4824159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4825589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4827020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4828429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4829879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4831371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4832848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4834259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4835690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4837109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4838520Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4839959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4841363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4842785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4844207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4845668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4847062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4848526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4850029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4851512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4852909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4854347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4855765Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4857171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4858586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4859984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4861405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4862813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4864233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4865648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4867126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4868628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4870080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4871474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4872913Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4874330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4875740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4877165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4878575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4880003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4881404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4882838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4884251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4885728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4887221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4888693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4890091Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4891534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4892933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4894352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4896008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4897441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4898847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4900249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4901678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4903092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4904584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4906104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4907587Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4908991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4910399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4911793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4913209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4914626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4916066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4917487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4918886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4920320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4921711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4923176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4924679Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4926160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4927538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4928945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4930349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4931778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4933164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4934596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4936000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4937404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4938816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4940208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 63%] 2024-08-07T18:08:34.4941676Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4943184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4944653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4946054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4947488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4948911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4950344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4951803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4953252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4954690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4956087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4957518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4958920Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4960343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4961781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4963287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4964764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4966191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4967597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4969011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4970417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4971851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4973249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4974685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4976100Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4977516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4978927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4980371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4981889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4983352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4984797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4986212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4987644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4989061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4990486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4991891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4993307Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4994748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4996392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4997812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.4999289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5000849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5002313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5003737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5005177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5006605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5007994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5009418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5010823Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5012242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5013692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5015135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5016578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5018032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5019535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5020977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5022427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5023845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5025281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5026676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5028093Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5029492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5030887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5032274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5033694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5035138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5036569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5038068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5039512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5040940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5042327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5043736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5045161Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5046592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5047995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5049422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5050833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5052275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5053675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5055145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5056676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5058140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5059559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5060969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5062393Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5063790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5065213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5066609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5068035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5069444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5070860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5072257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5073720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5075259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5076693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5078107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5079509Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5080944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5082332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5083751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5085183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5086611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5087999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5089419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5090828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5092333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5093816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5095531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5096959Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5098359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5099773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5101167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5102586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5103998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5105419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5106809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5108233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5109634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5111038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5112499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5114046Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5115527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5116958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5118387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5119789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5121219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5122618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5124040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5125455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5126882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5128277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5129695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5131130Z 
2024-08-07T18:08:34.51Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* — 252 parametrizations SKIPPED [0.0002s–0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 64%]
  Parameter grid covered by this block of skips (all with batch_size=8, seq_len_q=4):
    seq_len_k=256: head_dim 64 (is_causal=True only), 72, 8, 96 (is_causal False and True)
    seq_len_k=4:   head_dim 128, 160, 16, 192, 203, 21, 256 (is_causal False and True)
    dropout_p: 0.0, 0.22, 0.48
    dtype × scale: {bfloat16, float16} × {scale0, scale_l1}
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5492246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5493662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5495281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5496718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5498117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5499525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5500913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5502319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5503748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5505141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5506547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5507999Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5509523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5510965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5512360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5513754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5515169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5516579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5517980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5519373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5520788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5522166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5523550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5524976Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5526417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5527895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5529326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5530737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5532136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5533550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5534951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5536363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5537766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5539172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5540557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5541952Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5543361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5544789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5546288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5547726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5549132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5550507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5551969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5553365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5554788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5556169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5557571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5558968Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5560389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5561770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5563150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5564628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5566126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5567571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5568960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5570383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5571790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5573191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5574596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5576007Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5577393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5578785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5580173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5581570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5583022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5584508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5585950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5587339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 64%] 2024-08-07T18:08:34.5588768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5590149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5591544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5592936Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5594370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5596359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5597821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5599223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5600625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5602134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5603654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5605152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5606550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5607952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5609345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5610751Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5612136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5613533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5614930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5616379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5617783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5619161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5620559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5621993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5623496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5624937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5626337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5627739Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5629163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5630544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5631955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5633353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5634796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5636189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5637582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5638993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5640436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5641952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5643396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5644826Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5646223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5647618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5648996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5650410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5651808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5653206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5654616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5656018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5657432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5658853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5660345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5661802Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5663236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5664666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5666091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5667506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5668942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5670350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5671777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5673194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5674661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5676069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5677519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5679070Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5680536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5681948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5683350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5684800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5686210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5687624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5689034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5690467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5691875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5693293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5694716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5696674Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5698268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5699733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5701156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5702577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5704023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5705450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5706880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5708306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5709747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5711145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5712579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5713985Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5715490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5717030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5718487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5719912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5721340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5722758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5724160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5725602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5727018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5728430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5729830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5731258Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5732668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5734069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5735519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5737023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5738501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5739898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5741321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5742729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5744162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5745564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5746982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5748376Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5749806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5751191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5752593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5754035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5755561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5756994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5758385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5759807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5761213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5762616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5764024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5765460Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5766870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5768282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5769690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5771122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5772583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5774101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5775568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5776989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5778432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5779832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5781259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5782666Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5784106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5785497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5786905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5788318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5789751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5791185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5792693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5794163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5795917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5797338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5798742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5800161Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5801572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5802996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5804416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5805845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5807270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5808686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5810166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5811722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5813200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5814640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5816098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5817511Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5818983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5820374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5821798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5823209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5824658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5826062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5827481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5828925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5830440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5831887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5833294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5834735Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5836213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5837612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5839001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5840431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5841840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5843248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5844675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5846106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5847564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5849077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5850523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5851985Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5853388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5854824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5856211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5857605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5859034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5860419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5861823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5863227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5864683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5866071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5867524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5869027Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5870506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5871899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5873319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5874758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5876194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5877601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5879009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5880437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5881861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5883281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5884704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5886170Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5887669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5889130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5890523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5891956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5893367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5894808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5896484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5897907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5899340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5900754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5902530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5903962Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5905505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5907021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5908495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5909902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5911337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5912729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5914149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5915591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5917062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5918461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5919861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5921291Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5922696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5924142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5925660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5927124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5928526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5929938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5931713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5933177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5934648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5937423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5940060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5942707Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5945367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5948007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5950724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5953450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5956842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5960143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5962778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5965415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5968066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5970694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5973334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5977511Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5980244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5982866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5985475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5988182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5990930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5993620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.5997968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6000666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6003635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6006831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6009488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6012129Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6015167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6018660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6021315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6023963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6026726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6029542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6032233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6035090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6038202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6040859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6043488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6046133Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6048765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6051366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6053976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6056623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6059272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6061897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6064623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6068442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6071770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6074463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6077110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6079743Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6082382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6085013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6087613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6090244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6092920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6095849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6098498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6101255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6104015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6106708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6109338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6111969Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6114611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6117269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6119863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6122483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6125112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6127772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6130907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6134079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6146074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6148855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6151608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6154283Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6156925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6159571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6162223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6164862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6167525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6170172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6172841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6175471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6178129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6180861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6183584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6186280Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6188930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6191544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6194889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6198576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6201862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6205382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6208011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6210623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6213276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6215903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6218710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6221501Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6224205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6226820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6229446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6232094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6234761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6237411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6240059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6242678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6245361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6248011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6250638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6253324Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6256056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6259053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6261910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6265246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6268120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6271597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6274254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6276900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6279532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6282170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6284800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6287438Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6290089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6292795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6295801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6298529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6301211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6303857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6306494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6309154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6311794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6314440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6317117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6319758Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6322715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6325752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6329499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6332500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6335469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6340872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6343550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6346182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6348804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6351432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6354053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6356684Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6359319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6361945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6364534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6367250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6369998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6372669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6375301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6377935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6380578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6383185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6386105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6388781Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6391687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6396362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6399031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6401839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6408879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6413227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6415940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6418622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6421278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6423875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6426478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6429119Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6431779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6434413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6437067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6439708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6442360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6445050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6447779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6451105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6453782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6456652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6459528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6465992Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6469842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6472498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6475101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6477710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6480363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6483013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6485632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6488338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6491109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6494538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6497515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6500159Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6502833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6505470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6508103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6510741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6513378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6516022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6518705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6521365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6524150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6526924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6529603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6532249Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6534896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6537519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6540161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6542794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6545423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6548050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 65%] 2024-08-07T18:08:34.6550680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6553314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6555960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6558673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6561369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6564043Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6566680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6569305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6571932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6574557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6577205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6579833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6582444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6585099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6587745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6590374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6593021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6596053Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6598763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6601378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6603983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6606599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6609219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6611836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6614466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6617133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6619788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6622416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6625020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6627747Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6630508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6633220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6635841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6638485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6641151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6643767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6646446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6649113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6651799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6654449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6657083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6659709Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6662419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6665147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6667807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6670462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6673140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6675755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6678380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6681017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6683664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6686304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6688946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6691565Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6694199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6697456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6700203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6702961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6705668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6708325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6710933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6713566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6716215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6718909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6721533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6724150Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6726805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6729417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6732019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6734698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6737420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6740096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6742682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6745328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6747968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6750581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6753261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6755884Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6762768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6765510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6768156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6770866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6773595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6776371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6779028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6781661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6784317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6786982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6789607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6792264Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6794871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6797832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6800442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6803064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6805724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6808436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6811147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6813827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6816499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6819142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6821758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6824414Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6827076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6829687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6832299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6834927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6837583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6840215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6842905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6845615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6848308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6850940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6853587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6856212Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6858842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6861457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6864065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6866688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6869323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6871946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6874557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6877154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6879882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6882626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6885320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6887955Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6890592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6893190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6896031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6898698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6901333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6903951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6906562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6909167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6911797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6914522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6917383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6920082Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6922695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6925280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6927877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6930477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6933110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6935746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6938341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6940945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6943564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6946178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6948835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6951544Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6954247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6956864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6959462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6962098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6964731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6967365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6969990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6972644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6975292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6977924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6980544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6983210Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6985927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6988581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6991182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6993798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6996822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.6999447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7002074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7004695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7007307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7009967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7012583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7015208Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7017896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7020605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7023352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7026052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7028728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7031370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7034006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7036666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7039302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7041944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7044572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7047223Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7049865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7052530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7055162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7057876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7060558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7063200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7065849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7068474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7071109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7073736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7076346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7078972Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7081607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7084266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7086883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7089550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7092290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7094972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7098070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7100729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7103402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7106021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7108644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7111292Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7113929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7116587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7119214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7121841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7124583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7127341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7130039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7132749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7135410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7138019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7140661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7143282Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7145930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7148553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7151188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7153805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7156443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7159168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7161880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7164595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7167232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7169837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7172471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7175075Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7177732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7180347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7182944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7185589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7188221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7190839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7193442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7200147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7202930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7205607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7208195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7210825Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7213468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7216114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7218816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7221451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7224104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7226743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7229374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7232020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7234723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7237479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7240139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7242775Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7245417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7248043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7250680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7253313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7255958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7258584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7261201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7263832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7266479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7269149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7271835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7274559Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7277236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7279880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7282494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7285139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7287768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7290410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7293074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7296026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7298713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7301352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7304039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7306790Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7309489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7312150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7314762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7317439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7320068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7322689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7324103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7325502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7326968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7328340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7329753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7331194Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7332701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7334124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7335518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7336936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7338345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7339757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7341151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7342573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7343975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7345376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7346792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7348200Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7349586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7351019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7352567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7354002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7355396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7356803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7358209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7359593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7361011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7362394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7363791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7365188Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7366641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7368379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7371068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7372594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7375288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7376743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7378155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7379601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7383172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7384629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7386059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7387453Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7388853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7390258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7391671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7393156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7394656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7396369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7397772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7399177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7400601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7402004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7403415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7404820Z 
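What these cases check, for readers skimming the log: test_flash_attention_vs_math_ref_grads runs scaled dot-product attention through the fused flash-attention kernel and through PyTorch's unfused "math" reference, then compares outputs and gradients. The reference computation is out = softmax(Q Kᵀ · scale) V, with scale defaulting to 1/sqrt(head_dim) (the scale0/scale_l1 suffixes vary this scale factor). A rough, self-contained sketch of the reference side, assuming fp32 on CPU and ignoring the dropout axis (matching dropout between the fused kernel and the reference requires the kernel's RNG state, which the real tests handle separately):

    import math
    import torch
    import torch.nn.functional as F

    def math_ref_sdpa(q, k, v, is_causal=False, scale=None):
        # Unfused reference: softmax(Q K^T * scale) V, with an optional causal mask.
        scale = scale if scale is not None else 1.0 / math.sqrt(q.size(-1))
        attn = q @ k.transpose(-2, -1) * scale
        if is_causal:
            L, S = q.size(-2), k.size(-2)
            causal = torch.ones(L, S, dtype=torch.bool).tril()
            attn = attn.masked_fill(~causal, float("-inf"))
        return attn.softmax(-1) @ v

    # Shapes mirror one parametrization above: batch 8, seq_len_q 4, seq_len_k 8, head_dim 64.
    q = torch.randn(8, 2, 4, 64)
    k = torch.randn(8, 2, 8, 64)
    v = torch.randn(8, 2, 8, 64)
    torch.testing.assert_close(
        math_ref_sdpa(q, k, v),
        F.scaled_dot_product_attention(q, k, v),
        rtol=1e-4, atol=1e-5,
    )

On supported hardware the real tests make the same comparison in float16/bfloat16 against the flash kernel, backpropagate through both paths, and compare the q/k/v gradients under dtype-appropriate tolerances.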
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7406241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7407621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7409036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7410433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7411951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7413459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7414931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7416322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7417785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7419179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7420566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7421994Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7423394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7424788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7426168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7427580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7428976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7430368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7431809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7433308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7435186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7436632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7438030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7439426Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7440848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7442256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7443668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7445065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7446490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7447880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7449293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7450762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7452312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7453747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7455157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7456549Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7457941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7459399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7460783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7462208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7463613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7465011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7466396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7467801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7469246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7470735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7472172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7473589Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7475000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7476396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7477789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7479189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7480620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7482013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7483423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7484826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7486246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7487694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7489217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7490660Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7492077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7493454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7494848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7496552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7497957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7499353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7500734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7502155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7503561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7504959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7506334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7507828Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7509351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7510811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7512215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7513638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7515035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7516413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7517867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7519261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7520674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7522066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7523476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7524862Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7526322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7527789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7529230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7530613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7532029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7533415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7534788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7536192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7537585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7538980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7540357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7541771Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7543178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7544616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7546120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7547589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7548992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7550399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7551792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7553203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7554621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7556014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7557427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 66%] 2024-08-07T18:08:34.7558812Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_96_is_causal_True_* (12 parametrizations: dropout_p in {0.0, 0.22, 0.48} x dtype in {float16, bfloat16} x scale in {scale0, scale_l1}) SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 66%]-[ 67%]
2024-08-07T18:08:34.7575812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_* (head_dim in {16, 21, 32, 64, 128, 160, 192, 203, 256} x is_causal in {False, True} x the same dropout_p/dtype/scale sweep) SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 67%]
2024-08-07T18:08:34.7895442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_* (dropout_p in {0.0, 0.22} x dtype x scale, plus dropout_p 0.48 bfloat16 with scale0 and scale_l1) SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 67%]
SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7912896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7914368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7915790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7917252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7918663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7920128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7921562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7922986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7924392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7925824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7927250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7928678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA 
or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7930155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7931690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7933149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7934567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7935983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7937403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7938845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7940282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7941712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7943128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7944568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7945977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7947409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7948860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7950416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7951909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7953371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7954789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7956209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7957619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7959021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7960472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7961885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7963304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 67%] 2024-08-07T18:08:34.7964714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7966160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7967628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7969143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7970615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7972053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7973486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7974917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7976330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7977760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7979217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7980634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7982073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7983489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7984921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7986363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7987875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7989343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7990785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7992194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7993619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7995295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7996727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7998153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.7999576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8001020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8002449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8003873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8005356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8006919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8008411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8009862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8011282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8012723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8014145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8015578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8017040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8018466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8019925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8021327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8022751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8024214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8025756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8027250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8028674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8030127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8031576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8032974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8034407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8035839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8037283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8038691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8040132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8041577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8043041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8044558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8046027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8047470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8048903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8050355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8051764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8053192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8054617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8056038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8057438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8058876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8060322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8061765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8063280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8064737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8066174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8067580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8069009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8070431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8071865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8073267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8074689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8076102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8079080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8080510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8082185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8083758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8085235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8086652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8088064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8089509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8090933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8092346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8093745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8095444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8096869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8098291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8099692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8101220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8102763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8104222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 
hardware) [ 67%] 2024-08-07T18:08:34.8105635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8107053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8108496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8109902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8111340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8112769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8114215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8115647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8117113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8118550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8120045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8121563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8123032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8124460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8125881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8127308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8128712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8130147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8131594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8133021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8134429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8135866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8137291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8138754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8140256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8142005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8143454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8144861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8146301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8147728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8149179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8150616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8152061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8153497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8154946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%] 2024-08-07T18:08:34.8156362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 67%]
2024-08-07T18:08:34.8157854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 67%]
[... ~250 further test_flash_attention_vs_math_ref_grads parametrizations (batch_size 8; seq_len_q 512; seq_len_k 128 and 2048; head_dim 8, 16, 21, 32, 64, 72, 96, 128, 160, 203, 256; is_causal False/True; dropout_p 0.0/0.22/0.48; float16/bfloat16; scale0/scale_l1), each SKIPPED in 0.0002s-0.0003s with the same reason "(Does not support SDPA or pre-SM80 hardware)"; progress [ 67%] -> [ 68%] ...]
2024-08-07T18:08:34.8521408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not
support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8522856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8524290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8525742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8527152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8528599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8530033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8531483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8532919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8534363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8535787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8537259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8538788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8540263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8541704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8543140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8544590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8545981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8547428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8548857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8550289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8551769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8553225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8554648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8556121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 
SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8557629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8559104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8560555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8561995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8563436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8564870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8566333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8567754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8569197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8570636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8572120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8573539Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8575020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8576602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8578090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8579495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8580914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8582383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8583817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8585257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8586674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8588119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8589554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 
2024-08-07T18:08:34.8591002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8592428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8593910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8595765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8597279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8598694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8600149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8601582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8603021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8604459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8605886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8607333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 
68%] 2024-08-07T18:08:34.8608754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8610193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8611606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8613129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8614634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8616108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8617564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8619017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8620429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8621841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8623300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8624730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 
2024-08-07T18:08:34.8626151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8627569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8629023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8630453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8631933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8633491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8634994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8636432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8637878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8639303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8640764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8642221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 68%] 2024-08-07T18:08:34.8643647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8645090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8646522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8647979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8649427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8650902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8652438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8653934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8655355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8656796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8658222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8659672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or 
pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8661094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8662554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8663983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8665417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8666848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8668261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8669760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8671277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8672782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8674201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8675651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8677083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8678511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8679933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8681373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8682817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8684246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8685653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8687068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8688554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8690050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8691518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8692965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8694414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8696191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8697655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8699084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8700533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8701943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8703399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8704822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8706247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8707783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8709340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8710850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8712279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8713734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8715156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8716592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8718054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8719482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8720884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8722317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8723759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8725181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8726642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8728154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8729661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA 
or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8731070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8732514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8733941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8735390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8736806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8738240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8739659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8741105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8742535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8743975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8745445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8746982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support 
SDPA or pre-SM80 hardware) [ 68%] 2024-08-07T18:08:34.8748445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 68%]
[~260 further test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* cases elided (2024-08-07T18:08:34.8749862Z through 2024-08-07T18:08:34.9119860Z): with batch_size_8 and seq_len_q_512 fixed, the sweep covers seq_len_k_2048 with head_dim in {8, 72, 96} and seq_len_k_256 with head_dim in {16, 21, 32, 128, 160, 192, 203, 256}, each crossed with is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, and scale in {scale0, scale_l1}; every case is SKIPPED in 0.0002-0.0004s with the same reason, "Does not support SDPA or pre-SM80 hardware", while the progress counter advances from [ 68%] to [ 69%].]
pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9105379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9106787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9108234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9109648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9111067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9112486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9113976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9115494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9116999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9118415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9119860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9121295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 
hardware) [ 69%] 2024-08-07T18:08:34.9122719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9124159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9125562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9126995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9128391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9129820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9131234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9132741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9134236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9135704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9137115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9138539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 69%] 2024-08-07T18:08:34.9139966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9141373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9142825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9144229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9145632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9147025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9148459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9149872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9151381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9152896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9154374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9155789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 
2024-08-07T18:08:34.9157206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9158610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9160016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9161452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9162883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9164298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9165740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9167185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9168588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9170056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9171569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9173073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 
2024-08-07T18:08:34.9174471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9175900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9177314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9178710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9180122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9181526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9182973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9184387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9185810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9187215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9188683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9190191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 
2024-08-07T18:08:34.9191646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9193065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9194496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9196185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9197606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9199008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9200434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9201876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9203289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9204744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9206164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9207678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 
2024-08-07T18:08:34.9209207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9210692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9212093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9213526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9214931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9216351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9217806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9219237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9220656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9222076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9223514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9224932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9226394Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9227884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9229412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9230825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9232253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9233668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9235111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9236536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9237970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9239386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9240811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9242271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9243680Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9245180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9246672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9248151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9249543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9250964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9252400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9253837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9255237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9256661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9258547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9259970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9261394Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9262818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9264248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9265725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9267241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9268696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9270129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9271550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9272993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9274398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9275847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9277249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9278666Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9280076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9281505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9282959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9284398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9285889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9287344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9288771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9290171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9291590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9293019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9294457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9296130Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9297563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9298972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9300391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9301803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9303313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9304889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9306370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9307789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9309207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9310644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9312059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9313517Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9314915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9316332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9317775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9319199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9320621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9322079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9323620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9325060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9326480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9327883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9329313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9330703Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9332124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9333546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9334964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9336353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9337772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9339183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9340631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9342133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9343602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9345054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9346472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 69%] 2024-08-07T18:08:34.9347884Z 
2024-08-07T18:08:34.9349274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_{8,16,21,32,64,72,96,192,203,256}_is_causal_{False,True}_dropout_p_{0_0,0_22,0_48}_{bfloat16,float16}_{scale0,scale_l1}_cuda_{bfloat16,float16} SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 69-70%]
2024-08-07T18:08:34.9677781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_{False,True}_dropout_p_{0_0,0_22,0_48}_{bfloat16,float16}_{scale0,scale_l1}_cuda_{bfloat16,float16} SKIPPED [0.0002s each] (Does not support SDPA or pre-SM80 hardware) [ 70%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9711325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9712762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9714191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9715619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9717031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9718559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9720080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9721539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9722986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9724420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9725872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9727278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9728767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9730178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9731608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9733029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9734455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9735870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9737347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9738855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9740329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9741748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9743194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9744624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9746032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9747471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9748885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9750299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9751749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9753219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9754636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9756099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9757596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9759080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9760500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9761908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9763360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9764758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9766189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9767589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9769001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9770407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9771847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9773270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9774727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9776956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9778437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9779829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9781257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9782676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9784110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9785531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9786957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9788391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9789818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9791282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9792699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9794206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9796078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%]
2024-08-07T18:08:34.9797588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9798996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9800435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9801848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9803240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9804675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9806096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9807535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9808937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9810366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9811782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9813295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%]
2024-08-07T18:08:34.9814816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9816285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9817736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9819184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9820590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9822012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9823458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9824890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9826320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9827741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9829177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9830606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%]
2024-08-07T18:08:34.9832080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9833628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9835108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9836522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9837944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9839344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9840770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9842188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9843647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9845051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9846465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9847908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%]
2024-08-07T18:08:34.9849312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9850779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9852286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9853789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9855193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9856624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9858037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9859466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9860871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9862308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9863751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9865201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9866638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9868050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9869530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9871035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9872508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9873930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9875365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9876786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9878200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9879597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9881028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9882442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9883880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9885287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9886712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9888197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9889692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9891169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9892585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9894056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9895713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9897186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9898602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9900059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%]
2024-08-07T18:08:34.9901467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9902929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9904376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9905810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9907281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9908806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9910306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9911727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9913166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9914599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9916045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9917509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%]
2024-08-07T18:08:34.9918943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9920350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9921785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9923208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9924645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9926139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9927680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9929176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9930578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9932005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9933442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9934889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9936294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9937729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9939137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9940567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9941967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9943397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9944806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9946285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9947797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9949248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9950675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9952097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9953528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9954938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9956380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9957802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9959225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9960637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9962074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9963513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9964976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9966479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9967960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9969384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9970800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9972233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9973672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9975114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9976511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9977943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9979367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9980805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9982198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9983690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9985192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9986680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:34.9988078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9989492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9990929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9992355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9993834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9995541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9996998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9998419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:34.9999848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0001252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0002758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0004324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:35.0005811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0007217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0008645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0010055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0011444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0012861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0014293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0015720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0017120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0018596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0020009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0021489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:35.0022988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0024483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0025886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0027346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0028744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0030142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0031577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0032986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0034419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0035822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0037268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0038681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 
2024-08-07T18:08:35.0040144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0041684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0043149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0044572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0045979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0047368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0048786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0050194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0051641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0053063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0054482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0055914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0057312Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0058761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0060267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0061749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0063142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0064582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0065998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0067436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0068842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0070257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0071693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0073108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0074538Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0075946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0077410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0078904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0080381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0081781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0083230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0084643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0086073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0087473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0088906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0090322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0091730Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0093155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_587_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0094572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0096340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0097882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0099405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0100818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0102256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0103683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0105114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0106551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0108004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0109411Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0110820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0112257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0113698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0115161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0116647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0118166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0119587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0120999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0122406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0123863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0125280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0126704Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0128109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0129540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0130968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0132368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0133895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0135410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0136894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0138289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0139718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0141143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0142583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0144013Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0145450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0146856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0148288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0149685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0151103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0152553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0154075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0155550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0156948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0158375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0159825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0161248Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0162648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0164107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0165518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0166940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0168337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0169767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0171176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0172617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0174153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0175607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0177038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0178444Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0179869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0181274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0182708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0184119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0185522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0186933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0188360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0189749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0191208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0192698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0194172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0195831Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0197249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0198685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0200101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0201528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0202934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0204388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0205813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 70%] 2024-08-07T18:08:35.0207268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0208677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0210181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0211751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0213241Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0214686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0216102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0217578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0218978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0220405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0221822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0223255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0224660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0226091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0227504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0228988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0230493Z 
2024-08-07T18:08:35.0231964Z .. 2024-08-07T18:08:35.0593019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_* -- 252 consecutive parametrized cases, all SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 71%]
(Condensed sweep, every combination skipped with the identical reason: seq_len_k_64 with head_dim in {192 (tail of its sweep), 203, 21, 256, 32, 64, 72, 8, 96}, then seq_len_k_8 with head_dim in {128, 160, 16 (sweep continues below)}; crossed with is_causal in {False, True} x dropout_p in {0.0, 0.22, 0.48} x dtype in {bfloat16, float16} x scale in {scale0, scale_l1}, i.e. 24 cases per complete head_dim sweep.)
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0594449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0596104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0597517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0598929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0600317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0601733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0603141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0604566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0606028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0607566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0609037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0610441Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0611832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0613274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0614702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0616097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0617566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0618987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0620424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0621831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0623257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0624716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0626238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0627680Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0629098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0630503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0631939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0633347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0634758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0636171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0637583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0638999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0640404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0641832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0643249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0644713Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0646197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0647674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0649083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0650509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0651957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0653403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0654831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0656234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0657655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0659073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0660510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0661905Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0663399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0664917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0666377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0667760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0669184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0670597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0672017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0673433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0674852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0676257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0677668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0679090Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0680485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0681942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0683461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0684921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0686311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0687745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0689183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0690571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0691973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0693419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0694825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0696458Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0697888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0699290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0700775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0702280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0703774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0705719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0707186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0708584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0710030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0711477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0712916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0714329Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0715741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0717158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0718638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0720107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0721644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0723894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0725344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0726807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0728218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0729652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0731073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0732496Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0733901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0735352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0736768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0738180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0739652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0741167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0742623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0744018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0745460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0746891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0748313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0749709Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0751150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0752542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0753966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0755435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0756849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0758296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0759820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0761265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0762657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0764080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0765520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0766933Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0768327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0769757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0771161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0772563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0773957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0775400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0776796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0778245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0779740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0781167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0782587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0783979Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0785413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0787086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0788553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0789946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0791359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0792769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0794195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0795866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0797395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0798944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0800447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0801838Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0803246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0804667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0806084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0807490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0808874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0810284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0811691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0813102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0814492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0815974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0817787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0819302Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0820723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0822168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0823616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0825009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0826451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0827864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0829291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0830688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0832109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0833516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0834992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0836497Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0837964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0839354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0840780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0842226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0843606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0845029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0846462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0847867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0849255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0850686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0852088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0853535Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0855040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0856519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0857916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0859329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 71%] 2024-08-07T18:08:35.0860725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0862117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0863548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0864960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0866386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0867783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0869214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0870602Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0872059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0873561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0875042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0876421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0877827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0879226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0880650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0882036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0883431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0884860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0886263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0887672Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0889436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0891611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0894311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0897836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0900799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0903757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0905788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0908157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0911384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0913717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0915193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0916613Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0918086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0919491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0920914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0922401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0923945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0925396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0926842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0928244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0929655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0931056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0932470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0933877Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0935268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0936735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0938179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0939611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0941072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0942612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0944095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0945526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0946973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0948425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0949858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0951296Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0952775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0954192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0955631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0957065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0958495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0959975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0961517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0962978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0964411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0965840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0967312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0968722Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0970180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0971607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0973059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0974471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0975897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0977365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0978841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0980370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0981843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0983284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0984727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.0986162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0987600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0989046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0990470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0991893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0993305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0994732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0996515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0998033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.0999585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1001076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1002521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1003930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1005362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1006773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1008219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1009628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1011056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1012473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1013950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1015360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1016835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1018411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1019907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1021329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1022744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1024178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1025592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1028294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1030962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1033622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1036294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1038959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1041605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1044315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1047168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1049872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1052514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1055891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1058762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1061435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1064116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1066819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1069511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1072200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1074862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1078082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1080881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%]
2024-08-07T18:08:35.1083669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1086409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1089078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1091780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1094415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1098219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1100937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1103622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1106310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1108962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1111612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1114282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%]
2024-08-07T18:08:35.1117043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1119898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1123215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1125925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1128583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1131244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1133906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1136592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1139283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1142298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1144972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1147696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%]
2024-08-07T18:08:35.1150374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1153108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1155891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1158609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1161259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1163908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1166586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1169260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1171911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1174587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1177256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1183891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%]
2024-08-07T18:08:35.1186650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1189315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1192049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1194822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1197831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1200456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1203119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1205825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1208493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1211138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1213802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1216462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1219186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1221855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1224528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1227260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1230034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1232740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1235369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1238036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1240695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1243331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1249272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1252025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1254692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1257331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1260007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1262721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1265478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1268249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1270965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1273653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1276328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1279003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1281730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1284413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%]
2024-08-07T18:08:35.1287074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1289737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1292454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1295432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1298124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1300886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1303706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1306433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1309124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1311780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1314459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1317146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%]
2024-08-07T18:08:35.1319887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1322540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1325249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1327946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1330613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1333268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1335949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1338679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1341438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1344140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1346807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1349493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1352155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1354803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1357476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1360135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1362817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1365461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1368095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1370757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1373453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1376215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1378922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1381614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1384250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1386889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1389557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1392237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1394915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1397839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1400488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1403162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1405811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1408554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1411343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1414103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1416743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1419418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1422076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1424732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1427379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1430010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1432668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1435343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1437989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1440622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1443339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1446091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1448785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1451406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1454064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1456732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1459383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1462060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1464715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1467394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1470061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1472723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1475391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1478114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1480872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1483574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1486218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1488914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1491616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1494252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1497171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1499841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1502466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1505127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1507788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1510455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1513172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1515968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1518706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1521365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1524040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1526669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1529319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1531987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1534617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1537255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1539906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1542628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 
2024-08-07T18:08:35.1545275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1547959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1550686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1553455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1556090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1558727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1561397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1564064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1566697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1569325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1571966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1574617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1577265Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1579921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1582624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1585371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1588070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1590712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1593379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1596528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1599245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1601878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1604556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1607242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1609899Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1612557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1615215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1621066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1623880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1626608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1631930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1634655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1637318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1639943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1642598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1645271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1647928Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1650580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1653235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1655928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1658638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1662171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1664899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1667558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1678033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1680918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1683603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1686276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1688940Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1691580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1694224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1697201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1699847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1702587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1705427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1708177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1710812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1713457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1716108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1718799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1721450Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1724104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1726752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1729423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1732044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1734679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1737380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1740145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1742867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1745507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1748152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1750815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1753466Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1756120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1758749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1761440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1764076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1766706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1769333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1771976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1774645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1777370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1780103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1782755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1785437Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1788101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1790743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1793396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1796345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1799020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1801655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1804314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1806969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1809691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1812481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1815226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1817941Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1820574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1823217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1825857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1828494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1831113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1833747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1836395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1839037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1841676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1844365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1847130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1849811Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1852470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1855148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1857818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1860479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1863133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1865794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1868459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1871112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1873785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1876443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1879166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1881916Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1884595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1887268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1889929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1892580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1895500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1898159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1900813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1903434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1906072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1908723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1911387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1914121Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1916892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1919638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1922329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1924987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1927651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1930315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1933412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1936087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1938732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1941408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1944085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1946757Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1949491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1952225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1954917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1957558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1960198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1962843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1965516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1968165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1970805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1973452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1976124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1978773Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1981404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1984132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 72%] 2024-08-07T18:08:35.1986862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.1989548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.1992172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.1997993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2000728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2003367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2008190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2010854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2013515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2016161Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2018853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2021493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2024233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2027053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2029742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2032381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2035024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2037660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2040269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2042893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2045574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2048208Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2050830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2053507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2056155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2058846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2063471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2066218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2068886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2071544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2074211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2076829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2079502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2082190Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2084867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2087519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2090196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2092813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2095898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2098704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2101442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2104102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2106733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2109382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2112036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2114674Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2117326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2120025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2122695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2125316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2127941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2130582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2133535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2136373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2139066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2141737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2144406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2147054Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2149689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2152396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2155051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2157680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2160307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2162963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2165605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2168317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2171064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2173812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2176436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2179095Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2181717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2184355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2187019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2189683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2192295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2194947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2197942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2200589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2203306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2206105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2208809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2211452Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2214094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2216757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2219446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2222077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2224707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2227317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2229954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2232592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2235212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2237895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2240652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2243329Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2245946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2248585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2251267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2253913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2256541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2259172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2261828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2264469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2267113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2269781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2272484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2275198Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2277895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2280537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2283178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2285808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2288431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2291050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2293698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2296621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2299241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2301881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2304546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2307249Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2310792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2313498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2316140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2318832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2321457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2324104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2326756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2329401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2332047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2334680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2337330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2339969Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2342613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2345355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2349529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2352200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2355015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2357684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2360331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2362953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2365593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2368224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2370861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2373497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2376091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2378793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2381566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2384276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2386940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2389582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2392229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2394879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2397826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2400489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2403173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2405814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2407224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2408651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2410054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2411548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2413319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2414853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2416287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2417686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2419154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2420568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2422005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2423399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2424839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2426270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2427720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2429127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2430610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2432136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2433618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2435077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2436503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2437950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2439378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2440822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2442240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2443679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2445125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2446552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2447958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2449446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2450965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2452493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2453920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2455370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2456811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2458221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2459660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2461082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2462536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2463953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2465402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2466826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2468337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2469889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2471374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2472800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2474257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware)
2024-08-07T18:08:35.2475678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2477093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2478596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2480015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2481433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2482842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2484312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2485741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2487215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2488718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2490199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2491625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2493052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2494476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2496162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2497622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2499025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2500444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2501862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2503307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2504732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2506269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2507811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2509317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2510719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2512149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2513557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2515012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2516418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2517820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2519285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2520708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2522132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2523537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2525036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2526546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2528022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2529424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2530868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2532298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2533725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2535170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2536618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2538046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2539464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2540904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2542327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2543822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2545360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2546834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2548251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2549697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2551104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2552534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2553960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2555428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2556833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2558241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2559687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2561112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware)
2024-08-07T18:08:35.2562582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2564080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2565591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2567019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2568451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2569865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2571302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2572730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2574167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2575603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2577043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2578476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2579890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2581369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2582929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2584411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2585828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2587262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2588678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2590115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2591521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2592957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2594420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2596096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware)
2024-08-07T18:08:35.2597520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2598947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2600430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2601989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2603470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2604887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2606332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2607763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2609184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2610592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2612037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2613460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2614906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2616324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2617751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2619249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2620745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2622207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2623621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2625093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2626490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2627909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2629317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2630761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2632167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2633586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2635035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2636481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2637931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2639461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2640932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2642363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2643805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2645248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2646687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2648112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2649550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2650967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2652397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2653822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2655266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2656713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2658238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2659717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2661132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2662555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2663978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%]
2024-08-07T18:08:35.2665448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware)
2024-08-07T18:08:35.2666861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2668294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2669715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2671160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2672570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2673994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2675482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2677030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2678483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2679908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2681328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2682754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2684177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2685607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2687043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2688456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2689870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2691273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2692702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2694161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2695934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2697429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2698856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2700273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2701698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2703105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2704526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2705983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2707388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2708814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2710233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2711678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2713139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2714682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2716175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2717606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2719049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2720490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2721902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2723313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2724744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2726147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2727576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2728998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2730417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2731815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2733289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2734808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2736272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2737670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2739106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2740527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2741948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2743360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2744797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2746240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2747650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2749085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2750499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2752053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2753556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2755055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2756465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2757901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2759296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2760712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2762129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2763561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2764981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2766392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2767823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2769235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2770695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2772185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2773659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2775098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2776520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2777930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2779355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2780776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2782196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2783598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2785037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2786482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 
2024-08-07T18:08:35.2787885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2789354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2790844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2792319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2793713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2795353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2796782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2798209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2799608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2801027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2802431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2803863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2805292Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2806690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2808197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2809765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2811267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2812731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2814189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2815648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2817345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2818800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2820250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2821666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2823087Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2824503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2825929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2827415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2828913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2830378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2831790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2833236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2834641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2836083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2837494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2838937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 73%] 2024-08-07T18:08:35.2840344Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2841766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_2048_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2843189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2844626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2846088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2847588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2849069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2850494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2851920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2853327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2854761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2856197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2857626Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2859030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2860462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2861873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2863289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2864749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2866273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2867739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2869141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2870570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2871987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2873695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2875136Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2876573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2877992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2879437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2880837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2882272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2883745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2885373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2886826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2888235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2889667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2891100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2892528Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2893944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2895655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2897100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2898515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2899920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2901348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2902844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2904383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2905881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2907312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2908728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2910136Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2911558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2912967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2914404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2915818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2917242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2918708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2920151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2921600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2923116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2924576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2926036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2927451Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2928863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2930280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2931689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2933102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2934496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2935950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2937369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2938782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2940175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2941635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2943132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2944591Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2946028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2947455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2948879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2950277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2951710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2953135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2954579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2956013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2957445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2958857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2960342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2961855Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2963325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2964725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2966181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2967571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2968969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2970408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2971819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2973236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2974645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2976094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2977500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2978961Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2980451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2981932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2983352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2984779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2986207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2987644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2989076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2990480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2991900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2993317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2994772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2996451Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2997956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.2999477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3000966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3002365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3003787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3005197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3006626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3008026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3009447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3010853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3012269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3013700Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3015117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3016606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3018157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3019630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3021025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3022459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3023876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3025306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3026713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3028147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3032051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3035362Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3036783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3038206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3039734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3041201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3042605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3044015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3045455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3046848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3048263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3049687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3051104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3052576Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3054090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3055589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3057008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3058474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3059955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3061392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3062818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3064250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3065657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3067092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3068520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3069971Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3071377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3072849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3074347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3075738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3077203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3078670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3080130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3081535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3082964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3084369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3085797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3087194Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3088622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3090054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3091513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3092981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3094382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3096187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3097711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3099125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3100557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3101994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3103408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3104821Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3106223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3107649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3109048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3110544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3112016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3113414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3114882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3116332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3117743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3119182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3120651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3122060Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3123465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3124876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3126313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3127707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3129169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3130659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3132081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3133503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3134952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3136430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3137844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3139267Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3140698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3142116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3143522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3144940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3146328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3147807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3149272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3150695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3152089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3153571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3155026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3156431Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3157833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3159276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3160693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3162112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3163516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3164945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3166402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3167856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3169282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3170711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3172201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3173651Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3175071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3176477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3177914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3179301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3180730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3182141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3183569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3185053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3186525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3187930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3189334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3190793Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3192244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3193661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3195305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3196741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3198131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3199557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3200972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3202383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3203775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3205275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3206760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3208148Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3209657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3211110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3212522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3213914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3215316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3216711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3218170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3219572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3220990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3222385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3223854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3225298Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3226702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3228156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3229621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3231025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3232482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3233917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3235337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3236751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3238154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3239597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3241008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3242470Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3243942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3245363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3246812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3248253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3249670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3251076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3252516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3253905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3255317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3256721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3258154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%] 2024-08-07T18:08:35.3259563Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 74%]
2024-08-07T18:08:35.3261025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 74%]
[~250 similar lines elided: the remainder of the test_flash_attention_vs_math_ref_grads sweep with batch_size=8, seq_len_q=64, seq_len_k in {256, 4}, head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, and scale in {scale0, scale_l1}; every case SKIPPED in 0.0002-0.0003s with the same reason, "(Does not support SDPA or pre-SM80 hardware)", as the progress counter ticks from [ 74%] to [ 75%]]
2024-08-07T18:08:35.3619793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%]
2024-08-07T18:08:35.3621242Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3622705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3624091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3625533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3626965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3628336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3629749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3631173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3632564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3633940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3635343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3636739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3638134Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3639561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3641041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3642445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3643894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3645332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3646734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3648168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3649568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3651012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3652463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3653890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3655288Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3656692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3658135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3659600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3660998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3662401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3663839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3665318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3666695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3668080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3669499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3670915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3672316Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3673709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3675139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3676599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3678077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3679478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3680931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3682403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3683875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3685280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3686702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3688154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3689558Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3691005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3692420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3693851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3695525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3697033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3698437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3699862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3701328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3702860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3704272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3705711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3707116Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3708514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3709947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3711384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3712797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3714247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3715728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3717153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3718617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3720095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3721578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3722989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3724410Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3725818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3727222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3728651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3730060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3731476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3732930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3734411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3735816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3737230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3738698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3740204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3741596Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3743018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3744434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3745864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3747258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3748660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3750106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3751511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3752970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3754433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3755850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3757304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3758771Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3760194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3761615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3763034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3764445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3765840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3767242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3768785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3770175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3771635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3773106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3774538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3776013Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3777480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3778889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3780327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3781737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3783176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3784597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3786042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3787454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3788860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3790347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3791827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3793271Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3794725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3796473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3797889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3799305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3800706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3802127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3803558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3804975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3806383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3807788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3809298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3810781Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3812206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3813711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3815220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3816620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3818043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3819504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3820944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3822338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3823787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3825208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3826649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3828098Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3829561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3830981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3832435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3833923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3835324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3836754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3838172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3839594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3840992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3842423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3843865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_587_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 75%] 2024-08-07T18:08:35.3845279Z 
2024-08-07T18:08:35Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED (Does not support SDPA or pre-SM80 hardware) [ 75%]
[condensed: 168 consecutive parameterized cases, each skipped in 0.0002–0.0003s — batch_size=8, seq_len_q=64, seq_len_k=587, head_dim ∈ {8, 21, 32, 64, 72, 96, 203, 256}, is_causal ∈ {False, True}, dropout_p ∈ {0.0, 0.22, 0.48}, dtype ∈ {float16, bfloat16}, scale ∈ {scale0, scale_l1}, all with the identical skip reason above]
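Every case in this run hits the same guard: per the skip message, these flash-attention SDPA tests only execute on SM80-or-newer (Ampere-class) CUDA hardware, and this runner's GPU evidently fails that check. A minimal sketch of such a gate, assuming PyTorch >= 2.0 — the helper name is hypothetical and is not the decorator PyTorch's own suite uses:

```python
# Hypothetical capability gate illustrating how a "pre-SM80" skip arises;
# PyTorch's test suite uses its own internal helpers.
import unittest

import torch


def supports_flash_sdpa() -> bool:
    """True only on CUDA devices with compute capability >= 8.0 (SM80)."""
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_capability() >= (8, 0)


@unittest.skipIf(not supports_flash_sdpa(),
                 "Does not support SDPA or pre-SM80 hardware")
class FlashVsMathTests(unittest.TestCase):
    def test_smoke(self):
        self.assertTrue(torch.cuda.is_available())
```

On a pre-SM80 device the whole class is skipped with exactly this message, which is why every parameterization in the grid reports SKIPPED in a fraction of a millisecond.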
2024-08-07T18:08:35Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_* SKIPPED (Does not support SDPA or pre-SM80 hardware) [ 75%] → [ 76%]
[condensed: 84 further parameterized cases, each skipped in 0.0002–0.0003s — batch_size=8, seq_len_q=64; the final seq_len_k=587/head_dim=96 cases, then seq_len_k=64 with head_dim ∈ {16, 128, 160, 192}, over the same is_causal × dropout_p × dtype × scale grid; the head_dim=192 sweep continues beyond this excerpt, and the progress marker crosses from 75% to 76% within this run]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4208969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4210442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4211915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4213341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4214734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4216169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4217575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4219031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4220440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4221861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4223336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4224796Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4226198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4227622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4229097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4230566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4231982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4233421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4234860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4236257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4237815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4239271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4240757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4242160Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4243665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4245134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4246557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4247995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4249694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4251471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4253019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4254476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4255880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4257301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4258714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4260264Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4261773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4263834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4266122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4267525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4268970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4270597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4272125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4273526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4274949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4276384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4278636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4280052Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4281710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4283120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4284595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4286037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4287670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4289136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4290802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4292903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4295324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4296775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4298192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4299620Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4301371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4302836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4304726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4306418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4308481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4310961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4313104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4314631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4316249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4317938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4320070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4321816Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4323249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4324670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4326102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4327489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4328953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4330418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4331841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4333234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4334718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4336172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4337597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4339002Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4340425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4341819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4343225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4344656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4346049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4347510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4348969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4350381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4351772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4353234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4354709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4356114Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4357513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4358939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4360335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4361720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4363135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4364531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4365990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4367427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4368837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4370235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4371700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4373132Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4374545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4375947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4377380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4378765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4380156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4381581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4383010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4384417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4385892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4387386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4388797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4390251Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4391704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4393124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4394527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4396185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4397592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4399012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4400420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4401813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4403227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4404709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4406203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4407590Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4409058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4410525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4411947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4413341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4414765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4416177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4417599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4419038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4420450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4421869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4423332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4424795Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4426188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4427646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4429099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4430491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4431876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4433306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4434728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4436128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4437520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4438926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4440344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4441772Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%]
2024-08-07T18:08:35.4443243Z [... 251 further test_flash_attention_vs_math_ref_grads variants elided: the sweep covers batch_size=8, seq_len_q=64, seq_len_k in {64, 8}, head_dim in {8, 16, 21, 32, 64, 72, 96, 128, 160, 192, 203, 256}, is_causal in {False, True}, dropout_p in {0.0, 0.22, 0.48}, dtype in {float16, bfloat16}, and scale in {scale0, scale_l1}; every variant was SKIPPED in 0.0002s-0.0004s with the identical reason "(Does not support SDPA or pre-SM80 hardware)" at [ 76%], all logged within 2024-08-07T18:08:35, and the sweep continues past this excerpt ...]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4827119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4828564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4830031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 76%] 2024-08-07T18:08:35.4831406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4832805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4834205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4835621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4837016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4838423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4839878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4841303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4842729Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4844183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4845576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4847035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4848491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4849871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4851284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4852742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4854157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4855543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4856978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4858392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4859794Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4861229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4862691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4864075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4865492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4866961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4868347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4869819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4871205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4872619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4874008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4875422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4876830Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4878228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4879664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4881131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4882507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4883890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4885407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4886882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4888275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4889664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4891435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4892874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4894329Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4895971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4897411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4898898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4900391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4901774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4903162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4904646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4906113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4907534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4908929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4910351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4911724Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4913119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4914519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4915943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4917342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4918833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4920299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4921726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4923157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4924605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4926026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4927460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4928877Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4930272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4931685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4933088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4934494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4935940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4937429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4938887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4940281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4941708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4943155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4944570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4945984Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4947397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4948825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4950263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4951672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4953108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4954532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4956027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4957577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4959014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4960483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4961985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4963390Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4964817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4966229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4967685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4969077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4970544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4971987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4973401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4974878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4976341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4977786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4979204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4980676Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4982123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4983554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4985037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4986471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4987885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4989312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4990763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4992171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4993648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4995363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4996970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4998390Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.4999965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5001444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5002875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5004275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5005766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5007191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5008634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5010037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5011442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5012928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5014410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5015824Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5017236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5018755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5020232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5021704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5023107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5024551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5025969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5027405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5028818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5030249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5031707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5033152Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5034580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5035984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5037470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5038937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5040348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5041754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5043188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5044575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5046001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5047439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5048866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5050297Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5051744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5053185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5054600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5056062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5057548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5058988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5060418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5061851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5063262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5064701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5066134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5067588Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5068996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5070541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5072015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5073409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5074878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5076351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5077813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5079220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5080651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5082064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5083503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5084912Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5086339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5087781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5089284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5090726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5092131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5093616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5095339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5096783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5098224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5099675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5101097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5102525Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5104005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5105446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5106857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5108376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5109846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5111276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5112764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5114238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5115658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5117081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5118540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5119981Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5121406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5122824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5124265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5125659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5127120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5128597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5130032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5131468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5132932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5134356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5135771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5137197Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5138608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5140029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5141436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5142851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5144242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5145709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5147180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5148592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5150032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5151512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5152967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5154371Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5155788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5157220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5158656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5160068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5161507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5162925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5164415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5165877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5167310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5168728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5170218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5171738Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5173174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5174590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5176013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5177450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5178856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5180296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5181712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5183180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5184650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5186080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5187517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5188980Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5190438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5191870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5193286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5194690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5196347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5197858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5199313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5200716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5202220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5203716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5205149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5206551Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5208060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5209529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5210954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5212355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5213763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5215189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5216602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5218056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5219501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5220969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5222429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5223843Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5225236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5226708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5228196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5229608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5231013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5232461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5233867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5235271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5236712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5238138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5239616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5241068Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5242494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5243898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5245367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5247497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5248934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5250343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5251781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5253170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5254639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5256120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5257541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5258972Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5260439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5261924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5263335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5264818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5266268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5267701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5269192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5270623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5272023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5273458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5274884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5276301Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5277731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5279218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5280693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5282085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5283620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5285090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5286517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5287913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5289358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5290764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5292191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5293586Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5295502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5297155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5298700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5300272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5301671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5303158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5304646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5306055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5307454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5308950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5310388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5311797Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5313203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5314624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5316027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5317464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5318967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5320390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5321858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5323303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5324783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5326182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5327614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5329017Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5330442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5331918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5333362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5334764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5336266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5337752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5339172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5340599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5342052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5343535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5344951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5346492Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5347907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5349335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5350749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5352165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5353559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5355031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5356504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5357886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5359328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5360781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5362259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5363650Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5365115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_1024_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5366532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5367975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5369357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5370789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5372197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5373671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5375120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5376529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5377951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5379420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%] 2024-08-07T18:08:35.5380896Z 
2024-08-07T18:08:35.5380896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5382302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5383796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5385212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5386620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5388025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5389461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5390871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5392272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5393711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5395510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5396936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5398427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5399937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5401346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5402847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5404257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5405671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5407082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5408519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5409997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5411421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5412900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5414408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5415799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5417308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5418830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5420262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5421672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5423074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5424490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5425900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5427316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5428715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5430156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5431614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5433073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5434464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5435932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5437393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5438780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5440223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5441637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5443072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5444466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5445880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5447294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5448776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5450258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5451726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5453163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5454627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5456059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5457444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5458861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5460295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5461704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5463090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5464511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5465924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5467331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5468778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5470333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5471742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5473152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5474622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5476095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5477524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5478929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5480376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5481786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5483243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5484633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5486052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 77%]
2024-08-07T18:08:35.5487497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5488978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5490394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5491798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5493239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5494713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5496352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5497759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5499189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5500620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5502032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5503436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5504859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5506372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5507865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5509270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5510716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5512186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5513668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5515067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5516480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5517920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5519355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5520801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5522210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5523634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5525064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5526532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5527941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5529358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5530811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5532284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5533683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5535115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5536518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5537912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5539331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5540758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5542166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5543556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5545040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5546505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5547915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5549354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5550843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5552252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5553664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5555060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5556449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5557864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5559251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5560671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5562058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5563527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5564963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5566369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5567809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5569282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5570678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5572084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5573505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5574945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5576338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5577810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5579259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5580681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5582143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5583598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5585026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5586486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5587954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5589381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5590801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5592211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5593620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5595239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5596679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5598121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5599519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5601056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5602537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5603961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5605353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5606829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5608308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5609740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5611129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5612548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5613953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5615379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5616774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5618168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5619689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5621155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5622558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5623953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5625416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5626864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5628259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5629671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5631092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5632492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5633899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5635296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5636697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5638151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5639604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5641023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5642424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5643894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5645335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5646742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5648149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5649603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5650995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5652411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5653820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5655222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5656668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5658115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5659554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5660957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5662386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5663833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5665242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5666647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5668063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5669472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5670882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
2024-08-07T18:08:35.5672329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5675124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5676572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5678047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5679454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5680924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5682393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5683819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5685214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5686636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5688037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5689487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5690883Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5692303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5693696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5695428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5696912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5698295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5699794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5701271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5702676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5704069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5705488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5706898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5708295Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5709713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5711145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5712541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5713989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5715451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5716847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5718312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5719831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5721237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5722630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5724057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5725452Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5726860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5728253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5729692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5731071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5732515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5733984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5735380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5736780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5738210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5739691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5741087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5742494Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5743885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5745298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5746702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5748115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5749518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5750982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5752503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5753901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5755309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5756754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5758231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5759644Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5761071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5762472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5763889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5765270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5766680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5768072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5769513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5770942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5772400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5773791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5775230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5776689Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5778075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5779534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5780959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5782376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5783776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5785217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5786644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5788062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5789540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5791029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5792444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5793889Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5795610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5797035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5798472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5799901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5801324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5802735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5804178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5805584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5807009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5808516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5810021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5811421Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5812906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5814396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5815812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5817236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5818653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5820144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5821572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5823002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5824424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5825861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5827346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5828833Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5830255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5831723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5833194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5834584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5836006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5837424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5838863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5840278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5841708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5843128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5844563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5846008Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5847490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5848896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5850374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5851827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5853239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5854674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5856076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5857499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5858911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5860347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5861776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5863195Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5864646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5866123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5867527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5868989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5870442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5871865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5873272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5874665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5876089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5877490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5878934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5880342Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5881760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5883221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5884724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5886123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5887552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5889034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5890609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5892011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5893440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5894883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5896492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5897930Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5899362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5900784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5902269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5903759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5905162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5906595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5908074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5909579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5910981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5912417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5913837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5915255Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5916664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5918092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5919595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5921050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5922529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5923961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5925399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5926861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5928356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5929791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5931237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_2048_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 78%] 2024-08-07T18:08:35.5932652Z 
2024-08-07T18:08:35.5934079Z to 2024-08-07T18:08:35.6547098Z  test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_*  252 consecutive parameterizations SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware)  [ 78%] -> [ 79%]
  Fixed parameters across the run: batch_size_8, seq_len_q_8
  seq_len_k_2048: head_dim 203 (tail of its grid, from is_causal_False dropout_p_0_48 float16 onward), then head_dim 21, 256, 32, 64, 72, 8, 96 (full grids)
  seq_len_k_256:  head_dim 128 and 160 (full grids), then head_dim 16 (cut off mid-grid at is_causal_True dropout_p_0_48 bfloat16_scale_l1)
  Grid swept per head_dim (24 cases): is_causal {False, True} x dropout_p {0_0, 0_22, 0_48} x dtype {bfloat16, float16} x scale {scale0, scale_l1}
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6549738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6552369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6555091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6557798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6560418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6563111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6565811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6568456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6571100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6573789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6576475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6579121Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6581745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6584408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6587051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6589729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6592444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6595338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6598064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6600776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6603429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6606066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6608723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6611387Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6613988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6616628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6619334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6621975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6624670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6627392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6630080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6632764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6635463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6638116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6640779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6643429Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6646068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6648707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6651350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6654040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6656716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6659397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6662105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6664730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6667383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6670066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6672761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6675386Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6678014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6680637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6683296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6685950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6688584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6691241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6693930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6696848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6699488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6702129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6704879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6707584Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6710223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6712826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6715466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6718109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6720752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6723409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6726044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6728723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6731415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6734039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6736674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6739347Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6742036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6744689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6747326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6749977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6752603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6755253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6757927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6760575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6763185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6765889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6768611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6771305Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6773991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6776679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6779339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6781958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6784596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6787230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6789883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6792527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6795387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6798076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6800803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6803520Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6806148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6808839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6811544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6814177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6816855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6819524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6822190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6824853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6827496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6830121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6832772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6835462Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6838148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6840771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6843449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6846123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6848727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6851377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6854022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6856659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6859286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6861928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6864596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6867219Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6869893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6872580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6875217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6877898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6880573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6883209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6885904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6888564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6891195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6893810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6896752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6899399Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6902025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6904754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6907527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6910125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6912757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6915441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6918151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6920821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6923443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6926092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6928719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6931355Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6933985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6936680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6939400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6942094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6944715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6947348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6950051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6952809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6955440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6958080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6960719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 79%] 2024-08-07T18:08:35.6963374Z 
2024-08-07T18:08:35.6966022Z [log condensed: ~252 consecutive pytest entries, all with the same outcome] test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_* SKIPPED [0.0002s-0.0003s each] (Does not support SDPA or pre-SM80 hardware) [ 79%] -> [ 80%]
2024-08-07T18:08:35.7513973Z Parametrizations covered in this block: seq_len_k=256 with head_dim in {8, 72, 96}; seq_len_k=4 with head_dim in {16, 21, 32, 128, 160, 192, 203, 256}; is_causal in {False, True}; dropout_p in {0.0, 0.22, 0.48}; dtype in {float16, bfloat16}; scale in {scale0, scale_l1}. Every combination was SKIPPED with the identical reason shown above.
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7515388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7516775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7518225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7519723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7521126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7522541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7524000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7525476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7526854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7528267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7529676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7531071Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7532456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7533872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7535265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7536641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7538082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7539527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7540939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7542359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7543839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7545230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7546647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7548031Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7549438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7550825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7552295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7553706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7555095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7556598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7558052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7559452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7560842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7562342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7563806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7565208Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7566597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7568012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7569400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7570795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7572186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7573576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7575032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7576464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7577861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7579248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7580704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7582137Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7583537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7584939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7586360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7587735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7589134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7590539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7591960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7593392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7594844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7596519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7597916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7599389Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7600849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7602247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7603633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7605044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7606417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7607819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7609211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7610618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7611992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7613459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7614934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7616305Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7617742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7619191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7620642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7622028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7623433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7624853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7626272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7627657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7629072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7630464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7631925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7633362Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7634768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7636175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7637610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7639064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7640443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7641853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7643254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7644667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7646048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7647460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7648852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7650288Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7651719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7653133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7654619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7656059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7657549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7658957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7660425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7661833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7663248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7664679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7666119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7667509Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7668972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7670418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7671818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7673221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7674674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7676154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7677562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7678977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7680382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7681805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7683258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7684698Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7686097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7687524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7688984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7690458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7691853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7693305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7694802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7696509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7697946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7699362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7700794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7702194Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7703625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7705034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7706508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7707983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7709468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7710870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7712345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7713878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7715276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7716711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7718132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7719582Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7720983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7722399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7723890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7725291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7726721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7728203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7729608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7731076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7732531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7733947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7735381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7736780Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7738198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7739595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7741015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7742410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7743824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7745264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7746745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7748130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7749582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7751042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7752441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 80%] 2024-08-07T18:08:35.7753858Z 
2024-08-07T18:08:35.7753858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_587_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 80%]
[... 238 further test_flash_attention_vs_math_ref_grads variants elided, one per log line, all timestamped within 2024-08-07T18:08:35.77Z-35.81Z and every one SKIPPED in 0.0002-0.0003s with the identical reason "(Does not support SDPA or pre-SM80 hardware)". The sweep holds batch_size_8 and seq_len_q_8 fixed, runs seq_len_k_587 through head_dim {16, 192, 203, 21, 256, 32, 64, 72, 8, 96} and then begins seq_len_k_64 with head_dim_128, each head_dim crossed with is_causal {False, True} x dropout_p {0_0, 0_22, 0_48} x dtype {float16, bfloat16} x scale {scale0, scale_l1}; the progress counter ticks from [ 80%] to [ 81%] partway through the seq_len_k_64_head_dim_128 block ...]
2024-08-07T18:08:35.8097149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8098539Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8099949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8101362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8102789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8104281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8105775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8107179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8108665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8110132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8111547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8112968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8114408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8115822Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8117219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8118638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8120081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8121488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8122923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8124417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8125820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8127226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8128664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8130115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8131532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8132921Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8134360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8135756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8137175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8138562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8139960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8141401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8142881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8144286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8145694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8147132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8148589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8149989Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8151380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8152853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8154275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8155677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8157066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8158479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8159936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8161394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8162779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8164208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8165649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8167107Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8168486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8169884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8171313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8172702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8174130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8175539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8177024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8178413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8179861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8181322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8182748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8184201Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8185683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8187072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8188474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8189883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8191268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8192687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8194114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8195778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8197180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8198672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8200152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8201554Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8203002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8204525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8205929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8207315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8208738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8210151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8211580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8212972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8214420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8215823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8217290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8218740Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8220193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8221636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8223178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8224582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8225986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8227386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8228800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8230203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8231594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8233021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8234449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8235899Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8237352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8238770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8240166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8241614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8243064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8244501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8245908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8247308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8248724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8250125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8251554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8252944Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8254421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8255864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8257276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8258653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8260100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8261541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8262950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8264358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8265768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8267159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8268560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8269963Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8271343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8272760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8274238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8275696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8277082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8278538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8280024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8281407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8282801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8284257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8285668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8287057Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8288475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8289877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8291291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8292726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8294202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8296017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8297588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8299060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8300468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8301865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8303295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8304705Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8306096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8307513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8308923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8310316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8311760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8313282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8314707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8316173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8317614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8319095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8320559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8321993Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8323396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8324825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8326224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8327632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8329015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8330468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8331922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8333321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8334733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8336174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8337643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8339032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8340438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8341839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8343257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8344666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8346080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8347476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8348947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8350394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8351805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8353206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8354687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8356157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8357548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8358962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8360370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8361764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8363154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8364601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8366000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8367443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8368894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8370313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8371713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8373139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8374621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8376017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8377439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8378838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8380247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8381655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8383087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8384887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8386366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8387834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8389323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8390714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8392176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8393621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8395386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8396817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8398211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8399624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8401017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8402424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8403817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8405248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8406728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8408204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8409583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8411047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8412514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8413888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8415309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8416706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8418126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8419505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8420954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8422356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8423768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8425213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8426672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8428047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8429446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8430859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8432285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8433686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8435103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8436500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8437876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8439278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8440678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8442069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8443498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8444999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8446405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8447801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8449228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8450686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8452105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8453539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8454988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8456385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8457805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8459201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8460614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8461998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8463450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8464908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8466304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8467738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8469215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8470589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8471984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8473407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8474829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8476226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8477616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8479043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8480447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8481888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8483856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8485640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8487173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8488652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8490047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8491451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8492884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8494279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8495993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8497470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8498905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8500283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8501764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8503234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8504654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8506062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8507536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8509004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8510422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8511808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8513203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_128_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8514620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8516047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8517459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8518844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8520356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8521818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8523226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8524613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8526133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8527605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8529006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8530401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8531796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8533213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8534598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8536036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8537426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8538886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8540310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8541722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8543110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8544567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8546031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8547431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_160_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8548824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8550237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8551640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8553043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8554459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8555887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8557287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8558726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8560196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8561589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8563032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8564468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8565892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8567282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8568662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8570057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8571444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8572866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8574251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8575666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8577130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8578594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8579963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8581414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8582886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8584284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8585691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8587098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8588511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8589916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8591329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8592728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8594146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8595950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8597481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8598877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8600287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8601747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8603274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8604656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8606104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8607516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8608896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8610308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8611721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8613129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8614587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8616067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_192_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8617472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8618897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8620365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8621833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8623240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8624661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8626085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8627476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8628899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8630315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8631723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8633113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8634568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8636043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8637438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8638861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8640331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8641724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8643131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8644528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8645962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8647360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8648758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8650167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_203_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8651544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8653001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8654446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8655866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8657297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8658772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8660147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8661546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8662942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8664362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8665763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8667150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8668564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8669950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8671382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8672807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8674210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8675630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8677076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8678508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8679913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8681313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%]
2024-08-07T18:08:35.8682717Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8684090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_21_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8685512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8686941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8688319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8689763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8691224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8692645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8694030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8695831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8697340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8698758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8700195Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8701613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8703004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8704405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8705835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8707218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8708693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8710169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8711571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8712957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8714412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8715894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8717293Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8718684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_256_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8720150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8721552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 81%] 2024-08-07T18:08:35.8722993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8724391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8725820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8727247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8728693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8730156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8731549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8733018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8734462Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8735885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8737267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8738681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8740060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8741452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8742847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8744251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8745669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8747091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8749256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8750650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8752093Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8753588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8754995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8756423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8757807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8759196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8760607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8762007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8763397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8764842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8766309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8767804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8769167Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8770574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8772004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8773472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8774835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8776256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8777650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8779060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8780441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8781827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8783231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8784665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8786136Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8787525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8788934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8790372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8791845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8793223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8794637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8796335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8797756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8799144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8800559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8801964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8803353Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8804830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8806311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8807719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8809154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8810622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8812005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8813421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8814800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8816219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8817605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8819025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8820455Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8821860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_72_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8823293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8824740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8826143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8827582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8829035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8830425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8831814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8833196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8834615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8836019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8837420Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8838807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8840186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8841633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8843059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8844449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8845840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8847293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8848726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8850119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8851509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8852921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8854291Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8855696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8857099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8858494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8859938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8861379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8862798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8864195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8865673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8867120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8868529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8869929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8871339Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8872723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_False_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8874124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8875546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8876957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8878345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8879779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8881239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8882612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8884057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8885512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale0_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8886922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_bfloat16_scale_l1_cuda_bfloat16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8888297Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8889698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_96_is_causal_True_dropout_p_0_48_float16_scale_l1_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8891241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8892774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8894303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8896141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8897677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8899265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8900880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8902397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8903988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8905608Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8907162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8908686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8910235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8911744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8913281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8914801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8916351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8917901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8919498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8921041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8922619Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8924186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8925720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8927246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8928780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8930292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8931807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8933341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8934851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8936439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8938022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8939545Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8941101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8942685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8944195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8945722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8947253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8948774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8950289Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8951817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8953331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8954896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8956481Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8958012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8959519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8961065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8962651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8964154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8965679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8967208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8968752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8970241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8971774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8973265Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8974835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8976424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8977934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8979473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8981054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8982549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8984071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8985625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8987171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8988677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_32_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8990198Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8991728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8993294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8994879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8996709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8998321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.8999924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9001446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9002957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9004485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9005994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9007537Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9009054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9010583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9012166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9013767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9015282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9016828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9018386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9019986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9021514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9023018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9024558Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9026064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9027603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9029111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9030641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9032191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9033768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9035261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9036854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9038415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_256_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9039940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9041441Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9042981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9044478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9046002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9047527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9049033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9050605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9052164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9053730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9055281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9056877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9058382Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9059896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9061413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9062931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_256_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9064430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9065952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9067482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9069056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9070604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9072121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9073625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9075190Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_64_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9076768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9078271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9079787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9081293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_0_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9082810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_False_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9084309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale0_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9085842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_False_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9087362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_flash_attention_vs_math_ref_grads_nestedtensor_batch_size_8_max_seq_len_q_32_max_seq_len_kv_32_head_dim_8_dropout_p_0_1_float16_scale_l1_is_causal_True_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9088906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9090440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9091927Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9093443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9094989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9096736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9098247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9099727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths0_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9101215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9102693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9104171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9105672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9107161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9108751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 
hardware) [ 82%] 2024-08-07T18:08:35.9110301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9111808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths1_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9113341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9114909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9116375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9117890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9119371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9120905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9122374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9123871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_1_sequence_legnths2_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9125352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9126828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED 
[0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9128385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9129915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9131399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9132914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9134459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9135928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths0_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9137422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9138929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9140414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9141881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9143378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9144854Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9146384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9147904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths1_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9149400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_32_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9150895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_32_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9152429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_32_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9153984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_32_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0003s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9155454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_64_is_causal_False_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9156955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_64_is_causal_False_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9158429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_64_is_causal_True_dropout_p_0_0_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9159920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_attention_vs_math_ref_grads_cudagraph_batch_size_8_sequence_legnths2_head_dim_64_is_causal_True_dropout_p_0_22_float16_fused_kernel0_cuda_float16 SKIPPED [0.0002s] (Does not support SDPA or pre-SM80 hardware) [ 82%] 2024-08-07T18:08:35.9160824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_backwards_throws_determinism_warning_fused_kernel0_warn_only_False_cuda PASSED [0.0046s] [ 82%] 2024-08-07T18:08:35.9161737Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_backwards_throws_determinism_warning_fused_kernel0_warn_only_True_cuda PASSED [0.0082s] [ 82%] 2024-08-07T18:08:35.9163179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0259s] [ 82%] 2024-08-07T18:08:35.9164604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0262s] [ 82%] 2024-08-07T18:08:35.9166098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0260s] [ 82%] 2024-08-07T18:08:35.9167582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0272s] [ 82%] 2024-08-07T18:08:35.9169036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0256s] [ 82%] 2024-08-07T18:08:35.9170454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0269s] [ 82%] 2024-08-07T18:08:35.9171937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0256s] [ 82%] 2024-08-07T18:08:35.9173413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0221s] [ 82%] 2024-08-07T18:08:35.9174858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0243s] [ 82%] 2024-08-07T18:08:35.9176283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0238s] [ 82%] 2024-08-07T18:08:35.9177730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0252s] [ 82%] 2024-08-07T18:08:35.9179139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda 
PASSED [0.0253s] [ 82%] 2024-08-07T18:08:35.9180577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0248s] [ 82%] 2024-08-07T18:08:35.9181999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0246s] [ 82%] 2024-08-07T18:08:35.9183420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0247s] [ 82%] 2024-08-07T18:08:35.9184850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0201s] [ 82%] 2024-08-07T18:08:35.9186322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0229s] [ 82%] 2024-08-07T18:08:35.9187816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0234s] [ 82%] 2024-08-07T18:08:35.9189234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0233s] [ 82%] 2024-08-07T18:08:35.9190728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0247s] [ 82%] 2024-08-07T18:08:35.9192208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0231s] [ 82%] 2024-08-07T18:08:35.9193642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0243s] [ 82%] 2024-08-07T18:08:35.9195308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0236s] [ 82%] 2024-08-07T18:08:35.9196778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0193s] [ 82%] 2024-08-07T18:08:35.9198204Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0250s] [ 82%] 2024-08-07T18:08:35.9199635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0227s] [ 82%] 2024-08-07T18:08:35.9201082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0224s] [ 82%] 2024-08-07T18:08:35.9202506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0226s] [ 82%] 2024-08-07T18:08:35.9203940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0221s] [ 82%] 2024-08-07T18:08:35.9205420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0227s] [ 82%] 2024-08-07T18:08:35.9206921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0231s] [ 82%] 2024-08-07T18:08:35.9208333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_False_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0188s] [ 82%] 2024-08-07T18:08:35.9209776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0214s] [ 82%] 2024-08-07T18:08:35.9211313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0224s] [ 82%] 2024-08-07T18:08:35.9212837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0220s] [ 82%] 2024-08-07T18:08:35.9214252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0233s] [ 82%] 2024-08-07T18:08:35.9215696Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0217s] [ 82%] 2024-08-07T18:08:35.9217122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0224s] [ 82%] 2024-08-07T18:08:35.9218539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0229s] [ 82%] 2024-08-07T18:08:35.9220008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0192s] [ 82%] 2024-08-07T18:08:35.9221449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0205s] [ 82%] 2024-08-07T18:08:35.9222888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0211s] [ 82%] 2024-08-07T18:08:35.9224303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0211s] [ 82%] 2024-08-07T18:08:35.9225773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0219s] [ 82%] 2024-08-07T18:08:35.9227258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0212s] [ 82%] 2024-08-07T18:08:35.9228684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0217s] [ 82%] 2024-08-07T18:08:35.9230140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0219s] [ 82%] 2024-08-07T18:08:35.9231621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_False_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0182s] [ 82%] 2024-08-07T18:08:35.9233056Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0208s] [ 82%] 2024-08-07T18:08:35.9234489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0212s] [ 82%] 2024-08-07T18:08:35.9235911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0212s] [ 82%] 2024-08-07T18:08:35.9237333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0218s] [ 82%] 2024-08-07T18:08:35.9238766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0212s] [ 82%] 2024-08-07T18:08:35.9240177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0221s] [ 82%] 2024-08-07T18:08:35.9241615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0217s] [ 82%] 2024-08-07T18:08:35.9243035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_False_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0183s] [ 82%] 2024-08-07T18:08:35.9244510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0039s] [ 82%] 2024-08-07T18:08:35.9245971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0039s] [ 82%] 2024-08-07T18:08:35.9247410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0038s] [ 82%] 2024-08-07T18:08:35.9248811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_False_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0039s] [ 82%] 2024-08-07T18:08:35.9250281Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_False_cuda PASSED [0.0039s] [ 82%] 2024-08-07T18:08:35.9251739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_False_expand_v_num_heads_True_cuda PASSED [0.0040s] [ 82%] 2024-08-07T18:08:35.9253189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_False_cuda PASSED [0.0040s] [ 82%] 2024-08-07T18:08:35.9254609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_kernel0_expand_q_batch_True_expand_k_batch_True_expand_v_batch_True_expand_q_num_heads_True_expand_k_num_heads_True_expand_v_num_heads_True_cuda PASSED [0.0040s] [ 82%] 2024-08-07T18:08:35.9255379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_nested_broadcasting_query_dense_cuda PASSED [0.0317s] [ 82%] 2024-08-07T18:08:35.9256128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_kernels_seq_len_1_inputs_fused_kernel0_cuda PASSED [0.0316s] [ 82%] 2024-08-07T18:08:35.9256773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_sdp_choice_type_dense_cuda PASSED [0.0021s] [ 82%] 2024-08-07T18:08:35.9257436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_fused_sdp_choice_type_nested_cuda PASSED [0.0022s] [ 82%] 2024-08-07T18:08:35.9258216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_long_sequence_mask_float16_cuda_float16 PASSED [0.0015s] [ 82%] 2024-08-07T18:08:35.9258995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_long_sequence_mask_float32_cuda_float32 PASSED [0.0015s] [ 82%] 2024-08-07T18:08:35.9259721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_non_contig_mask_bug_cuda PASSED [0.0021s] [ 82%] 2024-08-07T18:08:35.9260511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_non_contiguous_mask_float16_cuda_float16 PASSED [0.0024s] [ 82%] 2024-08-07T18:08:35.9261315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_non_contiguous_mask_float32_cuda_float32 PASSED [0.0038s] [ 82%] 2024-08-07T18:08:35.9262036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_pad_mask_float16_cuda_float16 PASSED [0.0022s] [ 82%] 2024-08-07T18:08:35.9262765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_attention_pad_mask_float32_cuda_float32 PASSED [0.0023s] [ 82%] 2024-08-07T18:08:35.9263899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_eff_backwards_determinism_cuda SKIPPED [0.0003s] (This test is not behaving deterministically/non-deterministically as expected on CI/CD) [ 82%] 2024-08-07T18:08:35.9265235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0197s] [ 82%] 2024-08-07T18:08:35.9266537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 82%] 2024-08-07T18:08:35.9267803Z
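The wall of test_fused_kernels_nested_broadcasting_kernel0_* results above is one parametrized test swept over six boolean broadcasting knobs (expand the batch dim or the num_heads dim of q, k, and v independently), giving 2^6 = 64 cases per kernel. A minimal sketch of that cross product and its test ids follows; the knob names mirror the ids in the log, but the generator itself is a hypothetical reconstruction, not the code in test_transformers.py:

```python
from itertools import product

# Hypothetical reconstruction of the sweep behind the test ids above:
# six boolean knobs -> 2**6 = 64 parametrized cases per fused kernel.
KNOBS = (
    "expand_q_batch", "expand_k_batch", "expand_v_batch",
    "expand_q_num_heads", "expand_k_num_heads", "expand_v_num_heads",
)

def case_ids():
    for values in product([False, True], repeat=len(KNOBS)):
        # Reproduces the naming scheme in the log, e.g.
        # ..._expand_q_batch_True_..._expand_v_num_heads_False_cuda
        yield "_".join(f"{knob}_{value}" for knob, value in zip(KNOBS, values))

assert sum(1 for _ in case_ids()) == 64
```

In PyTorch's test suite this kind of sweep is typically expressed with stacked @parametrize decorators from torch.testing._internal.common_utils, which compose into the same cross product.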
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0089s] [ 82%] 2024-08-07T18:08:35.9270683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131105 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9273550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131126 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9276360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131168 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9277751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 [W807 18:05:34.668822508 attention.cpp:797] Warning: Dropout mask should only be used for testing purposes. (function operator()) 2024-08-07T18:08:35.9278047Z ('RERUN', {'yellow': True}) [0.0075s] [ 82%] 2024-08-07T18:08:35.9279387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 ('RERUN', {'yellow': True}) [0.0067s] [ 82%] 2024-08-07T18:08:35.9280685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 82%] 2024-08-07T18:08:35.9283521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131383 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) 
[ 82%] 2024-08-07T18:08:35.9284886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0075s] [ 82%] 2024-08-07T18:08:35.9287699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131435 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9290535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131090 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9293329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131139 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9296411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0015s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131495 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9299224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131609 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9302000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131624 for platform(s) linux. 
If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9304928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131537 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9307772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131181 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9309135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 82%] 2024-08-07T18:08:35.9311966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131605 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9314777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131606 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9317640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131602 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) 
[ 82%] 2024-08-07T18:08:35.9320463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131633 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9323313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 SKIPPED [0.0009s] (Test is disabled because an issue exists disabling it: https://github.com/pytorch/pytorch/issues/131646 for platform(s) linux. If you're seeing this on your local machine and would like to enable this test, please make sure CI is not set and you are not using the flag --import-disabled-tests.) [ 82%] 2024-08-07T18:08:35.9324675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 82%] 2024-08-07T18:08:35.9325966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0073s] [ 82%] 2024-08-07T18:08:35.9327243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 82%] 2024-08-07T18:08:35.9328579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 82%] 2024-08-07T18:08:35.9329905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 82%] 2024-08-07T18:08:35.9331192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0112s] [ 82%] 2024-08-07T18:08:35.9332477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0100s] [ 82%] 2024-08-07T18:08:35.9333819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 ('RERUN', {'yellow': True}) [0.0060s] [ 82%] 2024-08-07T18:08:35.9335183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 ('RERUN', {'yellow': True}) 
[0.0059s] [ 82%] 2024-08-07T18:08:35.9336453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 82%] 2024-08-07T18:08:35.9337755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 82%] 2024-08-07T18:08:35.9339094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 ('RERUN', {'yellow': True}) [0.0096s] [ 82%] 2024-08-07T18:08:35.9340383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0111s] [ 82%] 2024-08-07T18:08:35.9341760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 ('RERUN', {'yellow': True}) [0.0103s] [ 82%] 2024-08-07T18:08:35.9343107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 82%] 2024-08-07T18:08:35.9344387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 82%] 2024-08-07T18:08:35.9345676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 82%] 2024-08-07T18:08:35.9346992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0137s] [ 82%] 2024-08-07T18:08:35.9348334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0139s] [ 82%] 2024-08-07T18:08:35.9349626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 82%] 2024-08-07T18:08:35.9350908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 82%] 2024-08-07T18:08:35.9352205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0101s] [ 82%] 2024-08-07T18:08:35.9353596Z 
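Each SKIPPED entry above that cites a pytorch/pytorch issue URL comes from the issue-based test-disabling machinery: in CI, tests named in a downloaded disabled-tests list are skipped with exactly the message shown, and the message tells local users how to opt out. The real logic lives in torch.testing._internal.common_utils and is more involved; a rough sketch of the gate, assuming the disabled set maps test ids to issue URLs and the check keys off the CI environment variable:

```python
import os
import unittest

# Assumed shape (illustrative entry, not the real data source):
# test id -> issue that disabled it.
DISABLED_TESTS: dict[str, str] = {
    "TestSDPACudaOnlyCUDA.test_example_disabled":
        "https://github.com/pytorch/pytorch/issues/131105",
}

def maybe_skip_disabled(test_id: str) -> None:
    # Enforce disabling only in CI; locally the test still runs unless the
    # disabled list is explicitly imported (--import-disabled-tests).
    if os.environ.get("CI") and test_id in DISABLED_TESTS:
        raise unittest.SkipTest(
            "Test is disabled because an issue exists disabling it: "
            + DISABLED_TESTS[test_id]
        )
```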
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 ('RERUN', {'yellow': True}) [0.0098s] [ 82%] 2024-08-07T18:08:35.9354963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 ('RERUN', {'yellow': True}) [0.0099s] [ 82%] 2024-08-07T18:08:35.9356243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 82%] 2024-08-07T18:08:35.9357540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 82%] 2024-08-07T18:08:35.9358812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 82%] 2024-08-07T18:08:35.9360209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 ('RERUN', {'yellow': True}) [0.0132s] [ 82%] 2024-08-07T18:08:35.9361592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 ('RERUN', {'yellow': True}) [0.0130s] [ 82%] 2024-08-07T18:08:35.9362876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0140s] [ 82%] 2024-08-07T18:08:35.9364183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0143s] [ 82%] 2024-08-07T18:08:35.9365525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 82%] 2024-08-07T18:08:35.9366871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 82%] 2024-08-07T18:08:35.9368137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 82%] 2024-08-07T18:08:35.9369485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 ('RERUN', {'yellow': True}) [0.0109s] [ 82%] 
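The ('RERUN', {'yellow': True}) entries come from pytest-rerunfailures (14.0 in this session's plugin list): each failing attempt is reported as RERUN and the test is re-executed, with only the final attempt counting toward the summary, so RERUN, RERUN, PASSED means the case is flaky at the current tolerances. The same behavior can be requested locally, for example:

```python
import pytest

# With pytest-rerunfailures installed, failed attempts are reported as
# RERUN and the test is retried; only the last attempt's outcome counts.
@pytest.mark.flaky(reruns=2)
def test_sometimes_flaky():
    ...
```

Equivalently, `pytest --reruns 2` applies the same retry policy to a whole run without marking individual tests.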
2024-08-07T18:08:35.9370819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 ('RERUN', {'yellow': True}) [0.0114s] [ 82%] 2024-08-07T18:08:35.9372129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 FAILED [0.0109s] [ 82%] 2024-08-07T18:08:35.9372151Z 2024-08-07T18:08:35.9372330Z ==================================== RERUNS ==================================== 2024-08-07T18:08:35.9373378Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 _ 2024-08-07T18:08:35.9373553Z Traceback (most recent call last): 2024-08-07T18:08:35.9374136Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9374298Z check_out_and_grad( 2024-08-07T18:08:35.9374689Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9375066Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9375441Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9375582Z raise ValueError(msg) 2024-08-07T18:08:35.9375983Z ValueError: grad_query Test error 0.7423312840272539 is greater than threshold 1.857900247190236e-05! 2024-08-07T18:08:35.9376002Z 2024-08-07T18:08:35.9376266Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9377261Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 2024-08-07T18:08:35.9377319Z 2024-08-07T18:08:35.9377645Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9378684Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 _ 2024-08-07T18:08:35.9378838Z Traceback (most recent call last): 2024-08-07T18:08:35.9379418Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9379554Z check_out_and_grad( 2024-08-07T18:08:35.9379945Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9380346Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9380753Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9380953Z raise ValueError(msg) 2024-08-07T18:08:35.9381349Z ValueError: grad_query Test error 0.7423312840272539 is greater than threshold 1.857900247190236e-05! 
2024-08-07T18:08:35.9381368Z 2024-08-07T18:08:35.9381608Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9382573Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 2024-08-07T18:08:35.9382592Z 2024-08-07T18:08:35.9382907Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9383937Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 _ 2024-08-07T18:08:35.9384094Z Traceback (most recent call last): 2024-08-07T18:08:35.9384660Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9384812Z check_out_and_grad( 2024-08-07T18:08:35.9385200Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9385578Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9385954Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9386092Z raise ValueError(msg) 2024-08-07T18:08:35.9386509Z ValueError: grad_query Test error 0.5072804925091745 is greater than threshold 1.4052779395874737e-05! 2024-08-07T18:08:35.9386527Z 2024-08-07T18:08:35.9386767Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9387703Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 2024-08-07T18:08:35.9387726Z 2024-08-07T18:08:35.9388038Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9389065Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 _ 2024-08-07T18:08:35.9389235Z Traceback (most recent call last): 2024-08-07T18:08:35.9389797Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9389931Z check_out_and_grad( 2024-08-07T18:08:35.9390336Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9390767Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9391174Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9391378Z raise ValueError(msg) 2024-08-07T18:08:35.9391773Z ValueError: grad_query Test error 0.5072804925091745 is greater than threshold 1.4052779395874737e-05! 
2024-08-07T18:08:35.9391790Z 2024-08-07T18:08:35.9392045Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9392977Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 2024-08-07T18:08:35.9392994Z 2024-08-07T18:08:35.9393283Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9394372Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 _ 2024-08-07T18:08:35.9394570Z Traceback (most recent call last): 2024-08-07T18:08:35.9395441Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9395568Z check_out_and_grad( 2024-08-07T18:08:35.9395954Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9396341Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9396696Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9396832Z raise ValueError(msg) 2024-08-07T18:08:35.9397238Z ValueError: grad_query Test error 0.3257364332675934 is greater than threshold 0.0005705282092094421! 2024-08-07T18:08:35.9397255Z 2024-08-07T18:08:35.9397502Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9398464Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 2024-08-07T18:08:35.9398498Z 2024-08-07T18:08:35.9398792Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9399837Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 _ 2024-08-07T18:08:35.9400050Z Traceback (most recent call last): 2024-08-07T18:08:35.9400618Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9400740Z check_out_and_grad( 2024-08-07T18:08:35.9401149Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 188, in check_out_and_grad 2024-08-07T18:08:35.9401284Z _check_equal( 2024-08-07T18:08:35.9401658Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9401795Z raise ValueError(msg) 2024-08-07T18:08:35.9402181Z ValueError: grad_attn_mask Test error 2414.253173828125 is greater than threshold 885.2333374023438! 
2024-08-07T18:08:35.9402199Z 2024-08-07T18:08:35.9402458Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9403394Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 2024-08-07T18:08:35.9403413Z 2024-08-07T18:08:35.9403703Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9404843Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 _ 2024-08-07T18:08:35.9405065Z Traceback (most recent call last): 2024-08-07T18:08:35.9405644Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9405773Z check_out_and_grad( 2024-08-07T18:08:35.9406157Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9406562Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9406923Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9407058Z raise ValueError(msg) 2024-08-07T18:08:35.9407444Z ValueError: grad_query Test error 12.996935844421387 is greater than threshold 7.87343692779541! 2024-08-07T18:08:35.9407465Z 2024-08-07T18:08:35.9407768Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9408789Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 2024-08-07T18:08:35.9408807Z 2024-08-07T18:08:35.9409098Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9410132Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 _ 2024-08-07T18:08:35.9410302Z Traceback (most recent call last): 2024-08-07T18:08:35.9410861Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9411015Z check_out_and_grad( 2024-08-07T18:08:35.9411408Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9411787Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9412158Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9412294Z raise ValueError(msg) 2024-08-07T18:08:35.9412665Z ValueError: grad_query Test error 12.996935844421387 is greater than threshold 7.87343692779541! 
2024-08-07T18:08:35.9412682Z 2024-08-07T18:08:35.9412941Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9413878Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 2024-08-07T18:08:35.9413895Z 2024-08-07T18:08:35.9414212Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9415294Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 _ 2024-08-07T18:08:35.9415449Z Traceback (most recent call last): 2024-08-07T18:08:35.9416030Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9416162Z check_out_and_grad( 2024-08-07T18:08:35.9416546Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9416937Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9417292Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9417445Z raise ValueError(msg) 2024-08-07T18:08:35.9417871Z ValueError: grad_query Test error 14.850714683532715 is greater than threshold 14.620260238647461! 2024-08-07T18:08:35.9417927Z 2024-08-07T18:08:35.9418182Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9419136Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 2024-08-07T18:08:35.9419154Z 2024-08-07T18:08:35.9419445Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9420538Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 _ 2024-08-07T18:08:35.9420700Z Traceback (most recent call last): 2024-08-07T18:08:35.9421255Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9421468Z check_out_and_grad( 2024-08-07T18:08:35.9421904Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9422280Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9422666Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9422804Z raise ValueError(msg) 2024-08-07T18:08:35.9423200Z ValueError: grad_query Test error 14.850714683532715 is greater than threshold 14.620260238647461! 
2024-08-07T18:08:35.9423217Z 2024-08-07T18:08:35.9423462Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9424427Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 2024-08-07T18:08:35.9424440Z 2024-08-07T18:08:35.9424745Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9425780Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 _ 2024-08-07T18:08:35.9425947Z Traceback (most recent call last): 2024-08-07T18:08:35.9426506Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9426643Z check_out_and_grad( 2024-08-07T18:08:35.9427047Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9427427Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9427792Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9427956Z raise ValueError(msg) 2024-08-07T18:08:35.9428331Z ValueError: grad_query Test error 23.29561996459961 is greater than threshold 18.333898544311523! 2024-08-07T18:08:35.9428349Z 2024-08-07T18:08:35.9428589Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9429543Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 2024-08-07T18:08:35.9429560Z 2024-08-07T18:08:35.9429855Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9430917Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 _ 2024-08-07T18:08:35.9431115Z Traceback (most recent call last): 2024-08-07T18:08:35.9431720Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9431880Z check_out_and_grad( 2024-08-07T18:08:35.9432265Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9432656Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9433011Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9433148Z raise ValueError(msg) 2024-08-07T18:08:35.9433539Z ValueError: grad_query Test error 23.29561996459961 is greater than threshold 18.333898544311523! 
2024-08-07T18:08:35.9433556Z 2024-08-07T18:08:35.9433796Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9434784Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 2024-08-07T18:08:35.9434858Z 2024-08-07T18:08:35.9435167Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9435345Z =================================== FAILURES =================================== 2024-08-07T18:08:35.9436395Z _ TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 _ 2024-08-07T18:08:35.9436544Z Traceback (most recent call last): 2024-08-07T18:08:35.9437103Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 2930, in test_mem_efficient_attention_attn_mask_vs_math_ref_grads 2024-08-07T18:08:35.9437252Z check_out_and_grad( 2024-08-07T18:08:35.9437639Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 184, in check_out_and_grad 2024-08-07T18:08:35.9438039Z _check_equal(ref_grad, lp_ref_grad, comp_grad, fudge_factors.get(name, default_fudge_factor), name) 2024-08-07T18:08:35.9438404Z File "/var/lib/jenkins/workspace/test/test_transformers.py", line 144, in _check_equal 2024-08-07T18:08:35.9438545Z raise ValueError(msg) 2024-08-07T18:08:35.9438931Z ValueError: grad_query Test error 23.29561996459961 is greater than threshold 18.333898544311523! 2024-08-07T18:08:35.9438951Z 2024-08-07T18:08:35.9439190Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9440126Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 2024-08-07T18:08:35.9440162Z 2024-08-07T18:08:35.9440458Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9441113Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_transformers/test_transformers-6a9eb05ef756150e.xml - 2024-08-07T18:08:35.9441497Z =========================== short test summary info ============================ 2024-08-07T18:08:35.9443944Z FAILED [0.0109s] test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 - ValueError: grad_query Test error 23.29561996459961 is greater than threshold 18.333898544311523! 2024-08-07T18:08:35.9443970Z 2024-08-07T18:08:35.9444229Z To execute this test, run the following from the base repo dir: 2024-08-07T18:08:35.9445160Z python test/test_transformers.py -k TestSDPACudaOnlyCUDA.test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 2024-08-07T18:08:35.9445178Z 2024-08-07T18:08:35.9445546Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 2024-08-07T18:08:35.9445969Z !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!! 
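Every rerun and failure above has the same shape: check_out_and_grad compares a gradient from the memory-efficient kernel against a high-precision reference, with the tolerance derived from the low-precision reference's own error times a per-tensor fudge factor (fudge_factors.get(name, default_fudge_factor) in the tracebacks). The following is a plausible reconstruction of _check_equal from the call sites and error messages in this log, not the exact code in test/test_transformers.py:

```python
import torch

def _check_equal(ref, lp_ref, comp, fudge_factor, name):
    # ref:    reference computed in high precision (e.g. a float64 math path)
    # lp_ref: the same math run in the test's low precision (e.g. float16)
    # comp:   output of the fused kernel under test
    # The low-precision reference sets the noise floor; the fudge factor
    # bounds how much worse than that floor the kernel is allowed to be.
    ref_error = (ref.to(comp.dtype) - lp_ref).abs().max().item()
    comp_error = (ref.to(comp.dtype) - comp).abs().max().item()
    threshold = ref_error * fudge_factor
    if comp_error > threshold:
        raise ValueError(
            f"{name} Test error {comp_error} is greater than threshold {threshold}!"
        )
```

Under this reading, the failing `grad_query Test error 23.29... is greater than threshold 18.33...` means the fused kernel's query gradient deviated from the reference by roughly 27% more than the fudge-factored noise floor allows.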
2024-08-07T18:08:35.9446578Z ====== 1 failed, 207 passed, 37396 skipped, 12 rerun in 94.04s (0:01:34) ======= 2024-08-07T18:08:35.9446708Z Got exit code 1 2024-08-07T18:08:35.9446861Z Retrying single test... 2024-08-07T18:08:35.9447352Z Test results will be stored in test-reports/python-pytest/test_transformers/test_transformers-efb1627476a74b05.xml 2024-08-07T18:08:35.9447699Z ============================= test session starts ============================== 2024-08-07T18:08:35.9448071Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.5.0 -- /opt/conda/envs/py_3.10/bin/python 2024-08-07T18:08:35.9448212Z cachedir: .pytest_cache 2024-08-07T18:08:35.9448769Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2024-08-07T18:08:35.9448973Z rootdir: /var/lib/jenkins/workspace 2024-08-07T18:08:35.9449154Z configfile: pytest.ini 2024-08-07T18:08:35.9449590Z plugins: hypothesis-5.35.1, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, xdist-3.3.1, xdoctest-1.1.0 2024-08-07T18:08:35.9449976Z collecting ... collected 45344 items / 45343 deselected / 1 selected 2024-08-07T18:08:35.9451153Z stepcurrent: skipping 37603 already run items. Running only test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 2024-08-07T18:08:35.9451324Z Running 1 items in this shard 2024-08-07T18:08:35.9451331Z 2024-08-07T18:08:35.9452634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.1318s] [100%] 2024-08-07T18:08:35.9452658Z 2024-08-07T18:08:35.9453341Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_transformers/test_transformers-efb1627476a74b05.xml - 2024-08-07T18:08:35.9453844Z ===================== 1 passed, 45343 deselected in 4.85s ====================== 2024-08-07T18:08:35.9454002Z Got exit code 0 2024-08-07T18:08:35.9454277Z Test succeeded in new process, continuing with the rest of the tests 2024-08-07T18:08:35.9454782Z Test results will be stored in test-reports/python-pytest/test_transformers/test_transformers-68dbd8fab867c5cc.xml 2024-08-07T18:08:35.9455107Z ============================= test session starts ============================== 2024-08-07T18:08:35.9455479Z platform linux -- Python 3.10.14, pytest-7.3.2, pluggy-1.5.0 -- /opt/conda/envs/py_3.10/bin/python 2024-08-07T18:08:35.9455614Z cachedir: .pytest_cache 2024-08-07T18:08:35.9456183Z hypothesis profile 'pytorch_ci' -> database=None, max_examples=50, derandomize=True, suppress_health_check=[HealthCheck.too_slow] 2024-08-07T18:08:35.9456343Z rootdir: /var/lib/jenkins/workspace 2024-08-07T18:08:35.9456482Z configfile: pytest.ini 2024-08-07T18:08:35.9456920Z plugins: hypothesis-5.35.1, cpp-2.3.0, flakefinder-1.1.0, rerunfailures-14.0, xdist-3.3.1, xdoctest-1.1.0 2024-08-07T18:08:35.9457332Z collecting ... collected 45344 items / 37604 deselected / 7740 selected 2024-08-07T18:08:35.9457534Z stepcurrent: skipping 37604 already run items.
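The `Got exit code 1` / `Retrying single test...` / `Got exit code 0` sequence is the shard runner isolating the failure: it relaunches pytest for just the failing test in a fresh process, and because that retry passes, the failure is treated as flaky and the shard resumes from the recorded position (the `stepcurrent:` lines). A simplified sketch of that control flow follows; `run_pytest` and the argument handling are hypothetical stand-ins, not PyTorch's actual test/run_test.py:

```python
import subprocess
import sys

def run_pytest(args: list[str]) -> int:
    # Stand-in for the real runner: launch pytest in a fresh process and
    # surface its exit code, as the "Got exit code N" lines do.
    proc = subprocess.run([sys.executable, "-m", "pytest", "-v", *args])
    print(f"Got exit code {proc.returncode}")
    return proc.returncode

def run_shard_with_retry(shard_args: list[str], failing_nodeid: str) -> bool:
    if run_pytest(shard_args) == 0:
        return True
    print("Retrying single test...")
    # Re-run only the failing test in a brand-new pytest process.
    if run_pytest([failing_nodeid]) != 0:
        return False  # genuine failure: stop the shard here
    print("Test succeeded in new process, continuing with the rest of the tests")
    # The real runner resumes via its stepcurrent cache; deselecting the
    # flaky test approximates that continuation here.
    return run_pytest([*shard_args, "--deselect", failing_nodeid]) == 0
```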
2024-08-07T18:08:35.9457677Z Running 7740 items in this shard 2024-08-07T18:08:35.9457693Z 2024-08-07T18:08:35.9458994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.1395s] [ 0%] 2024-08-07T18:08:35.9460349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0139s] [ 0%] 2024-08-07T18:08:35.9461802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 [W807 18:06:20.730939456 attention.cpp:797] Warning: Dropout mask should only be used for testing purposes. (function operator()) 2024-08-07T18:08:35.9462044Z PASSED [0.1157s] [ 0%] 2024-08-07T18:08:35.9463328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0289s] [ 0%] 2024-08-07T18:08:35.9464674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0151s] [ 0%] 2024-08-07T18:08:35.9466033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0127s] [ 0%] 2024-08-07T18:08:35.9467307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0165s] [ 0%] 2024-08-07T18:08:35.9468603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0168s] [ 0%] 2024-08-07T18:08:35.9469882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0113s] [ 0%] 2024-08-07T18:08:35.9471186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0107s] [ 0%] 2024-08-07T18:08:35.9472463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0237s] [ 0%] 2024-08-07T18:08:35.9473762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0242s] [ 0%] 2024-08-07T18:08:35.9475042Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0126s] [ 0%] 2024-08-07T18:08:35.9476345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 0%] 2024-08-07T18:08:35.9477606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0133s] [ 0%] 2024-08-07T18:08:35.9478927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 0%] 2024-08-07T18:08:35.9480323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 0%] 2024-08-07T18:08:35.9481603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 0%] 2024-08-07T18:08:35.9482932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0104s] [ 0%] 2024-08-07T18:08:35.9484260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 0%] 2024-08-07T18:08:35.9485574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 0%] 2024-08-07T18:08:35.9486853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 0%] 2024-08-07T18:08:35.9488133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0085s] [ 0%] 2024-08-07T18:08:35.9489411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 0%] 2024-08-07T18:08:35.9490693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0082s] [ 0%] 2024-08-07T18:08:35.9491964Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 0%] 2024-08-07T18:08:35.9493239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 0%] 2024-08-07T18:08:35.9494538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 0%] 2024-08-07T18:08:35.9496119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 0%] 2024-08-07T18:08:35.9497528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 0%] 2024-08-07T18:08:35.9498874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0096s] [ 0%] 2024-08-07T18:08:35.9500166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 0%] 2024-08-07T18:08:35.9501431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0076s] [ 0%] 2024-08-07T18:08:35.9502778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 0%] 2024-08-07T18:08:35.9504114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0102s] [ 0%] 2024-08-07T18:08:35.9505394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0101s] [ 0%] 2024-08-07T18:08:35.9506696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0085s] [ 0%] 2024-08-07T18:08:35.9507977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 0%] 2024-08-07T18:08:35.9509263Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0075s] [ 0%] 2024-08-07T18:08:35.9510528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 0%] 2024-08-07T18:08:35.9511810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 0%] 2024-08-07T18:08:35.9513088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 0%] 2024-08-07T18:08:35.9514370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0095s] [ 0%] 2024-08-07T18:08:35.9515650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 0%] 2024-08-07T18:08:35.9516975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 0%] 2024-08-07T18:08:35.9518298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 0%] 2024-08-07T18:08:35.9519563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 0%] 2024-08-07T18:08:35.9520931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 0%] 2024-08-07T18:08:35.9522252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 0%] 2024-08-07T18:08:35.9523539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 0%] 2024-08-07T18:08:35.9524805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0085s] [ 0%] 2024-08-07T18:08:35.9526125Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 0%] 2024-08-07T18:08:35.9527395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 0%] 2024-08-07T18:08:35.9528682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 0%] 2024-08-07T18:08:35.9529947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 0%] 2024-08-07T18:08:35.9531220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 0%] 2024-08-07T18:08:35.9532505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 0%] 2024-08-07T18:08:35.9533771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 0%] 2024-08-07T18:08:35.9535097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 0%] 2024-08-07T18:08:35.9536442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 0%] 2024-08-07T18:08:35.9537723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0079s] [ 0%] 2024-08-07T18:08:35.9538996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 0%] 2024-08-07T18:08:35.9540318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 0%] 2024-08-07T18:08:35.9541638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 0%] 2024-08-07T18:08:35.9542914Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 0%] 2024-08-07T18:08:35.9544184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 0%] 2024-08-07T18:08:35.9545455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 0%] 2024-08-07T18:08:35.9546760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 0%] 2024-08-07T18:08:35.9548027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 0%] 2024-08-07T18:08:35.9549321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 0%] 2024-08-07T18:08:35.9550587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 0%] 2024-08-07T18:08:35.9551872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 0%] 2024-08-07T18:08:35.9553129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 0%] 2024-08-07T18:08:35.9554474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 0%] 2024-08-07T18:08:35.9555807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 0%] 2024-08-07T18:08:35.9557075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 0%] 2024-08-07T18:08:35.9558406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 0%] 2024-08-07T18:08:35.9559735Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 1%] 2024-08-07T18:08:35.9561028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0194s] [ 1%] 2024-08-07T18:08:35.9562307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0211s] [ 1%] 2024-08-07T18:08:35.9563597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0100s] [ 1%] 2024-08-07T18:08:35.9564885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0095s] [ 1%] 2024-08-07T18:08:35.9566205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0254s] [ 1%] 2024-08-07T18:08:35.9567486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0266s] [ 1%] 2024-08-07T18:08:35.9568786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0106s] [ 1%] 2024-08-07T18:08:35.9570079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0107s] [ 1%] 2024-08-07T18:08:35.9571351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0183s] [ 1%] 2024-08-07T18:08:35.9572692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0201s] [ 1%] 2024-08-07T18:08:35.9574019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0101s] [ 1%] 2024-08-07T18:08:35.9575314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 1%] 2024-08-07T18:08:35.9576610Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0238s] [ 1%] 2024-08-07T18:08:35.9577951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0246s] [ 1%] 2024-08-07T18:08:35.9579287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0103s] [ 1%] 2024-08-07T18:08:35.9580577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0117s] [ 1%] 2024-08-07T18:08:35.9581853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0184s] [ 1%] 2024-08-07T18:08:35.9583137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0204s] [ 1%] 2024-08-07T18:08:35.9584431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0101s] [ 1%] 2024-08-07T18:08:35.9585716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0101s] [ 1%] 2024-08-07T18:08:35.9587013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0251s] [ 1%] 2024-08-07T18:08:35.9588304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0260s] [ 1%] 2024-08-07T18:08:35.9589592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0107s] [ 1%] 2024-08-07T18:08:35.9590916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 1%] 2024-08-07T18:08:35.9592257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0160s] [ 1%] 2024-08-07T18:08:35.9593534Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0169s] [ 1%] 2024-08-07T18:08:35.9594822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0090s] [ 1%] 2024-08-07T18:08:35.9596443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 1%] 2024-08-07T18:08:35.9597808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0232s] [ 1%] 2024-08-07T18:08:35.9599106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0241s] [ 1%] 2024-08-07T18:08:35.9600377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0099s] [ 1%] 2024-08-07T18:08:35.9601679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 1%] 2024-08-07T18:08:35.9602957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 1%] 2024-08-07T18:08:35.9604249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 1%] 2024-08-07T18:08:35.9605525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 1%] 2024-08-07T18:08:35.9606848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 1%] 2024-08-07T18:08:35.9608120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0079s] [ 1%] 2024-08-07T18:08:35.9609397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 1%] 2024-08-07T18:08:35.9610748Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 1%] 2024-08-07T18:08:35.9612099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 1%] 2024-08-07T18:08:35.9613388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 1%] 2024-08-07T18:08:35.9614668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 1%] 2024-08-07T18:08:35.9616011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 1%] 2024-08-07T18:08:35.9617356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 1%] 2024-08-07T18:08:35.9618650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 1%] 2024-08-07T18:08:35.9619969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 1%] 2024-08-07T18:08:35.9621281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 1%] 2024-08-07T18:08:35.9622555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 1%] 2024-08-07T18:08:35.9623822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 1%] 2024-08-07T18:08:35.9625120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 1%] 2024-08-07T18:08:35.9626418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 1%] 2024-08-07T18:08:35.9627716Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 1%] 2024-08-07T18:08:35.9629034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 1%] 2024-08-07T18:08:35.9630386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 1%] 2024-08-07T18:08:35.9631679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 1%] 2024-08-07T18:08:35.9632958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 1%] 2024-08-07T18:08:35.9634269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 1%] 2024-08-07T18:08:35.9635593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 1%] 2024-08-07T18:08:35.9636893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 1%] 2024-08-07T18:08:35.9638155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 1%] 2024-08-07T18:08:35.9639449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0079s] [ 1%] 2024-08-07T18:08:35.9640734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 1%] 2024-08-07T18:08:35.9642015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 1%] 2024-08-07T18:08:35.9643296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 1%] 2024-08-07T18:08:35.9644595Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 1%] 2024-08-07T18:08:35.9645867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 1%] 2024-08-07T18:08:35.9647166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 1%] 2024-08-07T18:08:35.9648488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 1%] 2024-08-07T18:08:35.9649819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 1%] 2024-08-07T18:08:35.9651112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 1%] 2024-08-07T18:08:35.9652425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 1%] 2024-08-07T18:08:35.9653788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 1%] 2024-08-07T18:08:35.9655107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 1%] 2024-08-07T18:08:35.9656407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 1%] 2024-08-07T18:08:35.9657683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 1%] 2024-08-07T18:08:35.9658981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 1%] 2024-08-07T18:08:35.9660245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0079s] [ 2%] 2024-08-07T18:08:35.9661519Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 2%] 2024-08-07T18:08:35.9662806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 2%] 2024-08-07T18:08:35.9664094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 2%] 2024-08-07T18:08:35.9665379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 2%] 2024-08-07T18:08:35.9666719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 2%] 2024-08-07T18:08:35.9668054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 2%] 2024-08-07T18:08:35.9669330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 2%] 2024-08-07T18:08:35.9670615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 2%] 2024-08-07T18:08:35.9671933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 2%] 2024-08-07T18:08:35.9673286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 2%] 2024-08-07T18:08:35.9674560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 2%] 2024-08-07T18:08:35.9675822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 2%] 2024-08-07T18:08:35.9677133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 2%] 2024-08-07T18:08:35.9678408Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 2%] 2024-08-07T18:08:35.9679691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 2%] 2024-08-07T18:08:35.9680991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 2%] 2024-08-07T18:08:35.9682267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 2%] 2024-08-07T18:08:35.9683535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 2%] 2024-08-07T18:08:35.9684827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 2%] 2024-08-07T18:08:35.9686172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0084s] [ 2%] 2024-08-07T18:08:35.9687508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 2%] 2024-08-07T18:08:35.9688798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 2%] 2024-08-07T18:08:35.9690085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 2%] 2024-08-07T18:08:35.9691439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0112s] [ 2%] 2024-08-07T18:08:35.9692777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0116s] [ 2%] 2024-08-07T18:08:35.9694073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 2%] 2024-08-07T18:08:35.9695620Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 2%] 2024-08-07T18:08:35.9696961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0086s] [ 2%] 2024-08-07T18:08:35.9698238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 2%] 2024-08-07T18:08:35.9699526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 2%] 2024-08-07T18:08:35.9700811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 2%] 2024-08-07T18:08:35.9702094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0112s] [ 2%] 2024-08-07T18:08:35.9703395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 2%] 2024-08-07T18:08:35.9704745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 2%] 2024-08-07T18:08:35.9706126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 2%] 2024-08-07T18:08:35.9707414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0094s] [ 2%] 2024-08-07T18:08:35.9708713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 2%] 2024-08-07T18:08:35.9710070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 2%] 2024-08-07T18:08:35.9711447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 2%] 2024-08-07T18:08:35.9712718Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0117s] [ 2%] 2024-08-07T18:08:35.9713997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0122s] [ 2%] 2024-08-07T18:08:35.9715299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 2%] 2024-08-07T18:08:35.9716594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 2%] 2024-08-07T18:08:35.9717888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 2%] 2024-08-07T18:08:35.9719167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 2%] 2024-08-07T18:08:35.9720508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 2%] 2024-08-07T18:08:35.9721785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 2%] 2024-08-07T18:08:35.9723071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0111s] [ 2%] 2024-08-07T18:08:35.9724396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 2%] 2024-08-07T18:08:35.9725749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 2%] 2024-08-07T18:08:35.9727048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 2%] 2024-08-07T18:08:35.9728369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0121s] [ 2%] 2024-08-07T18:08:35.9736418Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0127s] [ 2%] 2024-08-07T18:08:35.9737854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0074s] [ 2%] 2024-08-07T18:08:35.9739150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 2%] 2024-08-07T18:08:35.9740455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0166s] [ 2%] 2024-08-07T18:08:35.9741769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0172s] [ 2%] 2024-08-07T18:08:35.9743043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 2%] 2024-08-07T18:08:35.9744369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 2%] 2024-08-07T18:08:35.9745652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0124s] [ 2%] 2024-08-07T18:08:35.9746948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0138s] [ 2%] 2024-08-07T18:08:35.9748216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 2%] 2024-08-07T18:08:35.9749631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 2%] 2024-08-07T18:08:35.9750989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0168s] [ 2%] 2024-08-07T18:08:35.9752271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0173s] [ 2%] 2024-08-07T18:08:35.9753559Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 2%]
2024-08-07T18:08:35.9754897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 2%]
2024-08-07T18:08:35.9756246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0138s] [ 2%]
2024-08-07T18:08:35.9757525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0151s] [ 2%]
2024-08-07T18:08:35.9758819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 2%]
2024-08-07T18:08:35.9760104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 2%]
2024-08-07T18:08:35.9761397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0183s] [ 2%]
2024-08-07T18:08:35.9762678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0184s] [ 2%]
2024-08-07T18:08:35.9763994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 2%]
2024-08-07T18:08:35.9765286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 2%]
2024-08-07T18:08:35.9766551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 2%]
2024-08-07T18:08:35.9767887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0120s] [ 2%]
2024-08-07T18:08:35.9769233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 3%]
2024-08-07T18:08:35.9770530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 3%]
2024-08-07T18:08:35.9771797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0164s] [ 3%]
2024-08-07T18:08:35.9773136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0165s] [ 3%]
2024-08-07T18:08:35.9774489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 3%]
2024-08-07T18:08:35.9775780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 3%]
2024-08-07T18:08:35.9777046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 3%]
2024-08-07T18:08:35.9778331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 3%]
2024-08-07T18:08:35.9779616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 3%]
2024-08-07T18:08:35.9780890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 3%]
2024-08-07T18:08:35.9782179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0085s] [ 3%]
2024-08-07T18:08:35.9783461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 3%]
2024-08-07T18:08:35.9784767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 3%]
2024-08-07T18:08:35.9786043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 3%]
2024-08-07T18:08:35.9787411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 3%]
2024-08-07T18:08:35.9788739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 3%]
2024-08-07T18:08:35.9790020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 3%]
2024-08-07T18:08:35.9791294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 3%]
2024-08-07T18:08:35.9792607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 3%]
2024-08-07T18:08:35.9793965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 3%]
2024-08-07T18:08:35.9795540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 3%]
2024-08-07T18:08:35.9796865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 3%]
2024-08-07T18:08:35.9798144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0073s] [ 3%]
2024-08-07T18:08:35.9799427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 3%]
2024-08-07T18:08:35.9800690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 3%]
2024-08-07T18:08:35.9801982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 3%]
2024-08-07T18:08:35.9803259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0090s] [ 3%]
2024-08-07T18:08:35.9804573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 3%]
2024-08-07T18:08:35.9805953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 3%]
2024-08-07T18:08:35.9807306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 3%]
2024-08-07T18:08:35.9808597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 3%]
2024-08-07T18:08:35.9809862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 3%]
2024-08-07T18:08:35.9811204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 3%]
2024-08-07T18:08:35.9812546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 3%]
2024-08-07T18:08:35.9813826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0084s] [ 3%]
2024-08-07T18:08:35.9815115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 3%]
2024-08-07T18:08:35.9816410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 3%]
2024-08-07T18:08:35.9817687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 3%]
2024-08-07T18:08:35.9818952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 3%]
2024-08-07T18:08:35.9820293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 3%]
2024-08-07T18:08:35.9821572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 3%]
2024-08-07T18:08:35.9822856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 3%]
2024-08-07T18:08:35.9824135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 3%]
2024-08-07T18:08:35.9825472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 3%]
2024-08-07T18:08:35.9826794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 3%]
2024-08-07T18:08:35.9828087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 3%]
2024-08-07T18:08:35.9829344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 3%]
2024-08-07T18:08:35.9830673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 3%]
2024-08-07T18:08:35.9831988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 3%]
2024-08-07T18:08:35.9833255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 3%]
2024-08-07T18:08:35.9834563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 3%]
2024-08-07T18:08:35.9835858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 3%]
2024-08-07T18:08:35.9837129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 3%]
2024-08-07T18:08:35.9838399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 3%]
2024-08-07T18:08:35.9839683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0073s] [ 3%]
2024-08-07T18:08:35.9840960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 3%]
2024-08-07T18:08:35.9842235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 3%]
2024-08-07T18:08:35.9843565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 3%]
2024-08-07T18:08:35.9844911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0086s] [ 3%]
2024-08-07T18:08:35.9846228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 3%]
2024-08-07T18:08:35.9847469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 3%]
2024-08-07T18:08:35.9848808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 3%]
2024-08-07T18:08:35.9850123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 3%]
2024-08-07T18:08:35.9851407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 3%]
2024-08-07T18:08:35.9852669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 3%]
2024-08-07T18:08:35.9853953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 3%]
2024-08-07T18:08:35.9855248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 3%]
2024-08-07T18:08:35.9856518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 3%]
2024-08-07T18:08:35.9857810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 3%]
2024-08-07T18:08:35.9859087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 3%]
2024-08-07T18:08:35.9860375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0201s] [ 3%]
2024-08-07T18:08:35.9861651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0218s] [ 3%]
2024-08-07T18:08:35.9862990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0097s] [ 3%]
2024-08-07T18:08:35.9864340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0096s] [ 3%]
2024-08-07T18:08:35.9865639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0287s] [ 3%]
2024-08-07T18:08:35.9866920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0297s] [ 3%]
2024-08-07T18:08:35.9868261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0104s] [ 3%]
2024-08-07T18:08:35.9869596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0107s] [ 4%]
2024-08-07T18:08:35.9870865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0212s] [ 4%]
2024-08-07T18:08:35.9872167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0232s] [ 4%]
2024-08-07T18:08:35.9873449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0101s] [ 4%]
2024-08-07T18:08:35.9874765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0100s] [ 4%]
2024-08-07T18:08:35.9876037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0289s] [ 4%]
2024-08-07T18:08:35.9877338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0301s] [ 4%]
2024-08-07T18:08:35.9878617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0113s] [ 4%]
2024-08-07T18:08:35.9879915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0111s] [ 4%]
2024-08-07T18:08:35.9881227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0239s] [ 4%]
2024-08-07T18:08:35.9882573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0260s] [ 4%]
2024-08-07T18:08:35.9883843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0111s] [ 4%]
2024-08-07T18:08:35.9885134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0111s] [ 4%]
2024-08-07T18:08:35.9886473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0316s] [ 4%]
2024-08-07T18:08:35.9887812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0325s] [ 4%]
2024-08-07T18:08:35.9889102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0118s] [ 4%]
2024-08-07T18:08:35.9890380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0120s] [ 4%]
2024-08-07T18:08:35.9891672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0190s] [ 4%]
2024-08-07T18:08:35.9892950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0207s] [ 4%]
2024-08-07T18:08:35.9894246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0095s] [ 4%]
2024-08-07T18:08:35.9895831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 4%]
2024-08-07T18:08:35.9897133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0281s] [ 4%]
2024-08-07T18:08:35.9898429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0288s] [ 4%]
2024-08-07T18:08:35.9899694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0102s] [ 4%]
2024-08-07T18:08:35.9901083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 4%]
2024-08-07T18:08:35.9902429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 4%]
2024-08-07T18:08:35.9903720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 4%]
2024-08-07T18:08:35.9905009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 4%]
2024-08-07T18:08:35.9906360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 4%]
2024-08-07T18:08:35.9907713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 4%]
2024-08-07T18:08:35.9909006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 4%]
2024-08-07T18:08:35.9910281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 4%]
2024-08-07T18:08:35.9911577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 4%]
2024-08-07T18:08:35.9912844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 4%]
2024-08-07T18:08:35.9914115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 4%]
2024-08-07T18:08:35.9915428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 4%]
2024-08-07T18:08:35.9916722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 4%]
2024-08-07T18:08:35.9917993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0085s] [ 4%]
2024-08-07T18:08:35.9919314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 4%]
2024-08-07T18:08:35.9920691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 4%]
2024-08-07T18:08:35.9921966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 4%]
2024-08-07T18:08:35.9923234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 4%]
2024-08-07T18:08:35.9924574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 4%]
2024-08-07T18:08:35.9925900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 4%]
2024-08-07T18:08:35.9927184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 4%]
2024-08-07T18:08:35.9928455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 4%]
2024-08-07T18:08:35.9929755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 4%]
2024-08-07T18:08:35.9931028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 4%]
2024-08-07T18:08:35.9932319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 4%]
2024-08-07T18:08:35.9933579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 4%]
2024-08-07T18:08:35.9934894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 4%]
2024-08-07T18:08:35.9936159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 4%]
2024-08-07T18:08:35.9937425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 4%]
2024-08-07T18:08:35.9938749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 4%]
2024-08-07T18:08:35.9940074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 4%]
2024-08-07T18:08:35.9941347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 4%]
2024-08-07T18:08:35.9942665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 4%]
2024-08-07T18:08:35.9944006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 4%]
2024-08-07T18:08:35.9945298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 4%]
2024-08-07T18:08:35.9946573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 4%]
2024-08-07T18:08:35.9947869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 4%]
2024-08-07T18:08:35.9949122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 4%]
2024-08-07T18:08:35.9950405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 4%]
2024-08-07T18:08:35.9951695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 4%]
2024-08-07T18:08:35.9953011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 4%]
2024-08-07T18:08:35.9954285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 4%]
2024-08-07T18:08:35.9955576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 4%]
2024-08-07T18:08:35.9956887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 4%]
2024-08-07T18:08:35.9958229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 4%]
2024-08-07T18:08:35.9959479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 4%]
2024-08-07T18:08:35.9960754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 4%]
2024-08-07T18:08:35.9962063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 4%]
2024-08-07T18:08:35.9963387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 4%]
2024-08-07T18:08:35.9964669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 4%]
2024-08-07T18:08:35.9965938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 4%]
2024-08-07T18:08:35.9967216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 4%]
2024-08-07T18:08:35.9968488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 4%]
2024-08-07T18:08:35.9969763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 5%]
2024-08-07T18:08:35.9971034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 5%]
2024-08-07T18:08:35.9972319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 5%]
2024-08-07T18:08:35.9973592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 5%]
2024-08-07T18:08:35.9974868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 5%]
2024-08-07T18:08:35.9976203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 5%]
2024-08-07T18:08:35.9977509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 5%]
2024-08-07T18:08:35.9978786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 5%]
2024-08-07T18:08:35.9980103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 5%]
2024-08-07T18:08:35.9981443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 5%]
2024-08-07T18:08:35.9982696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 5%]
2024-08-07T18:08:35.9983976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 5%]
2024-08-07T18:08:35.9985269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0097s] [ 5%]
2024-08-07T18:08:35.9986560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 5%]
2024-08-07T18:08:35.9987826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 5%]
2024-08-07T18:08:35.9989099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 5%]
2024-08-07T18:08:35.9990389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0136s] [ 5%]
2024-08-07T18:08:35.9991673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0139s] [ 5%]
2024-08-07T18:08:35.9992965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 5%]
2024-08-07T18:08:35.9994288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 5%]
2024-08-07T18:08:35.9995940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0097s] [ 5%]
2024-08-07T18:08:35.9997233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 5%]
2024-08-07T18:08:35.9998515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 5%]
2024-08-07T18:08:35.9999862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 5%]
2024-08-07T18:08:36.0001208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0138s] [ 5%]
2024-08-07T18:08:36.0002495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0138s] [ 5%]
2024-08-07T18:08:36.0003769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 5%]
2024-08-07T18:08:36.0005094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 5%]
2024-08-07T18:08:36.0006366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0105s] [ 5%]
2024-08-07T18:08:36.0007649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 5%]
2024-08-07T18:08:36.0008921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 5%]
2024-08-07T18:08:36.0010208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 5%]
2024-08-07T18:08:36.0011477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0147s] [ 5%]
2024-08-07T18:08:36.0012824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0144s] [ 5%]
2024-08-07T18:08:36.0014160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 5%]
2024-08-07T18:08:36.0015460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 5%]
2024-08-07T18:08:36.0016786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0094s] [ 5%]
2024-08-07T18:08:36.0018103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 5%]
2024-08-07T18:08:36.0019439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 5%]
2024-08-07T18:08:36.0020747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 5%]
2024-08-07T18:08:36.0022036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0138s] [ 5%]
2024-08-07T18:08:36.0023318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0141s] [ 5%]
2024-08-07T18:08:36.0024604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 5%]
2024-08-07T18:08:36.0025892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 5%]
2024-08-07T18:08:36.0027166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0149s] [ 5%]
2024-08-07T18:08:36.0028457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0160s] [ 5%]
2024-08-07T18:08:36.0029723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 5%]
2024-08-07T18:08:36.0031003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 5%]
2024-08-07T18:08:36.0032322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0220s] [ 5%]
2024-08-07T18:08:36.0033671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0227s] [ 5%]
2024-08-07T18:08:36.0034959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 5%]
2024-08-07T18:08:36.0036255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 5%]
2024-08-07T18:08:36.0037570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0151s] [ 5%]
2024-08-07T18:08:36.0038905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0170s] [ 5%]
2024-08-07T18:08:36.0040163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0082s] [ 5%]
2024-08-07T18:08:36.0041435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 5%]
2024-08-07T18:08:36.0042729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0225s] [ 5%]
2024-08-07T18:08:36.0044009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0230s] [ 5%]
2024-08-07T18:08:36.0045317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 5%]
2024-08-07T18:08:36.0046600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 5%]
2024-08-07T18:08:36.0047889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0168s] [ 5%]
2024-08-07T18:08:36.0049161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0185s] [ 5%]
2024-08-07T18:08:36.0050481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0088s] [ 5%]
2024-08-07T18:08:36.0051821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 5%]
2024-08-07T18:08:36.0053085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0237s] [ 5%]
2024-08-07T18:08:36.0054371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0243s] [ 5%]
2024-08-07T18:08:36.0055701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0100s] [ 5%]
2024-08-07T18:08:36.0057048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0099s] [ 5%]
2024-08-07T18:08:36.0058310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0142s] [ 5%]
2024-08-07T18:08:36.0059594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0152s] [ 5%]
2024-08-07T18:08:36.0060857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 5%]
2024-08-07T18:08:36.0062148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 5%]
2024-08-07T18:08:36.0063413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0218s] [ 5%]
2024-08-07T18:08:36.0064718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0223s] [ 5%]
2024-08-07T18:08:36.0065999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0092s] [ 5%]
2024-08-07T18:08:36.0067270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 5%]
2024-08-07T18:08:36.0068543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0074s] [ 5%]
2024-08-07T18:08:36.0069860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 5%]
2024-08-07T18:08:36.0071187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 6%]
2024-08-07T18:08:36.0072459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 6%]
2024-08-07T18:08:36.0073739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 6%]
2024-08-07T18:08:36.0075086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 6%]
2024-08-07T18:08:36.0076460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 6%]
2024-08-07T18:08:36.0077738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 6%]
2024-08-07T18:08:36.0079009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 6%]
2024-08-07T18:08:36.0080305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 6%]
2024-08-07T18:08:36.0081564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 6%]
2024-08-07T18:08:36.0082844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 6%]
2024-08-07T18:08:36.0084122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 6%]
2024-08-07T18:08:36.0085429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 6%]
2024-08-07T18:08:36.0086691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 6%]
2024-08-07T18:08:36.0088024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 6%]
2024-08-07T18:08:36.0089381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 6%]
2024-08-07T18:08:36.0090681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 6%]
2024-08-07T18:08:36.0091944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 6%]
2024-08-07T18:08:36.0093263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 6%]
2024-08-07T18:08:36.0094618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0106s] [ 6%]
2024-08-07T18:08:36.0096150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 6%]
2024-08-07T18:08:36.0097452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 6%]
2024-08-07T18:08:36.0098738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 6%]
2024-08-07T18:08:36.0100022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0075s] [ 6%]
2024-08-07T18:08:36.0101279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 6%]
2024-08-07T18:08:36.0102555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 6%]
2024-08-07T18:08:36.0103824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 6%]
2024-08-07T18:08:36.0105113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 6%]
2024-08-07T18:08:36.0106404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 6%]
2024-08-07T18:08:36.0107751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 6%] 2024-08-07T18:08:36.0109118Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 6%] 2024-08-07T18:08:36.0110370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 6%] 2024-08-07T18:08:36.0111711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 6%] 2024-08-07T18:08:36.0113041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 6%] 2024-08-07T18:08:36.0114323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 6%] 2024-08-07T18:08:36.0115599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0092s] [ 6%] 2024-08-07T18:08:36.0116878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 6%] 2024-08-07T18:08:36.0118147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 6%] 2024-08-07T18:08:36.0119414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 6%] 2024-08-07T18:08:36.0120728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 6%] 2024-08-07T18:08:36.0122006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 6%] 2024-08-07T18:08:36.0123287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 6%] 2024-08-07T18:08:36.0124552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 6%] 2024-08-07T18:08:36.0125898Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0093s] [ 6%] 2024-08-07T18:08:36.0127225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 6%] 2024-08-07T18:08:36.0128499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 6%] 2024-08-07T18:08:36.0129760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 6%] 2024-08-07T18:08:36.0131065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0075s] [ 6%] 2024-08-07T18:08:36.0132408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 6%] 2024-08-07T18:08:36.0133664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 6%] 2024-08-07T18:08:36.0134964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 6%] 2024-08-07T18:08:36.0136239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 6%] 2024-08-07T18:08:36.0137529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 6%] 2024-08-07T18:08:36.0138795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 6%] 2024-08-07T18:08:36.0140083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 6%] 2024-08-07T18:08:36.0141353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 6%] 2024-08-07T18:08:36.0142630Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 6%] 2024-08-07T18:08:36.0143889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 6%] 2024-08-07T18:08:36.0145226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 6%] 2024-08-07T18:08:36.0146567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0090s] [ 6%] 2024-08-07T18:08:36.0147832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 6%] 2024-08-07T18:08:36.0149151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 6%] 2024-08-07T18:08:36.0150475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 6%] 2024-08-07T18:08:36.0151755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0262s] [ 6%] 2024-08-07T18:08:36.0153033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0290s] [ 6%] 2024-08-07T18:08:36.0154319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0112s] [ 6%] 2024-08-07T18:08:36.0155622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0113s] [ 6%] 2024-08-07T18:08:36.0156891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0402s] [ 6%] 2024-08-07T18:08:36.0158183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0421s] [ 6%] 2024-08-07T18:08:36.0159460Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0124s] [ 6%] 2024-08-07T18:08:36.0160757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0123s] [ 6%] 2024-08-07T18:08:36.0162027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0270s] [ 6%] 2024-08-07T18:08:36.0163359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0306s] [ 6%] 2024-08-07T18:08:36.0164680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0120s] [ 6%] 2024-08-07T18:08:36.0165968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0119s] [ 6%] 2024-08-07T18:08:36.0167238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0405s] [ 6%] 2024-08-07T18:08:36.0168567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0425s] [ 6%] 2024-08-07T18:08:36.0169950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0127s] [ 6%] 2024-08-07T18:08:36.0171224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0127s] [ 7%] 2024-08-07T18:08:36.0172519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0299s] [ 7%] 2024-08-07T18:08:36.0173806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0337s] [ 7%] 2024-08-07T18:08:36.0175119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0128s] [ 7%] 2024-08-07T18:08:36.0176399Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0124s] [ 7%] 2024-08-07T18:08:36.0177691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0427s] [ 7%] 2024-08-07T18:08:36.0178983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0445s] [ 7%] 2024-08-07T18:08:36.0180274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0141s] [ 7%] 2024-08-07T18:08:36.0181592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0140s] [ 7%] 2024-08-07T18:08:36.0182933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0253s] [ 7%] 2024-08-07T18:08:36.0184213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0276s] [ 7%] 2024-08-07T18:08:36.0185500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0111s] [ 7%] 2024-08-07T18:08:36.0186834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0109s] [ 7%] 2024-08-07T18:08:36.0188181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0400s] [ 7%] 2024-08-07T18:08:36.0189464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0415s] [ 7%] 2024-08-07T18:08:36.0190728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0122s] [ 7%] 2024-08-07T18:08:36.0192021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0122s] [ 7%] 2024-08-07T18:08:36.0193313Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0074s] [ 7%] 2024-08-07T18:08:36.0194625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 7%] 2024-08-07T18:08:36.0196175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 7%] 2024-08-07T18:08:36.0197474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 7%] 2024-08-07T18:08:36.0198773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 7%] 2024-08-07T18:08:36.0200050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 7%] 2024-08-07T18:08:36.0201418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 7%] 2024-08-07T18:08:36.0202771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 7%] 2024-08-07T18:08:36.0204064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0074s] [ 7%] 2024-08-07T18:08:36.0205365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 7%] 2024-08-07T18:08:36.0206719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 7%] 2024-08-07T18:08:36.0208065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 7%] 2024-08-07T18:08:36.0209358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 7%] 2024-08-07T18:08:36.0210649Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 7%] 2024-08-07T18:08:36.0211935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 7%] 2024-08-07T18:08:36.0213233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 7%] 2024-08-07T18:08:36.0214501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 7%] 2024-08-07T18:08:36.0215886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 7%] 2024-08-07T18:08:36.0217164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 7%] 2024-08-07T18:08:36.0218462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 7%] 2024-08-07T18:08:36.0219780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0102s] [ 7%] 2024-08-07T18:08:36.0221176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0100s] [ 7%] 2024-08-07T18:08:36.0222466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 7%] 2024-08-07T18:08:36.0223740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 7%] 2024-08-07T18:08:36.0225072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 7%] 2024-08-07T18:08:36.0226418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 7%] 2024-08-07T18:08:36.0227705Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 7%] 2024-08-07T18:08:36.0228976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 7%] 2024-08-07T18:08:36.0230271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0096s] [ 7%] 2024-08-07T18:08:36.0231560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 7%] 2024-08-07T18:08:36.0232848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 7%] 2024-08-07T18:08:36.0234128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 7%] 2024-08-07T18:08:36.0235432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 7%] 2024-08-07T18:08:36.0236703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 7%] 2024-08-07T18:08:36.0237971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 7%] 2024-08-07T18:08:36.0239307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 7%] 2024-08-07T18:08:36.0240629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0094s] [ 7%] 2024-08-07T18:08:36.0241925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 7%] 2024-08-07T18:08:36.0243201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 7%] 2024-08-07T18:08:36.0244553Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 7%] 2024-08-07T18:08:36.0245885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 7%] 2024-08-07T18:08:36.0247177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 7%] 2024-08-07T18:08:36.0248444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 7%] 2024-08-07T18:08:36.0249726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 7%] 2024-08-07T18:08:36.0251013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0096s] [ 7%] 2024-08-07T18:08:36.0252286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 7%] 2024-08-07T18:08:36.0253640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 7%] 2024-08-07T18:08:36.0254930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 7%] 2024-08-07T18:08:36.0256231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 7%] 2024-08-07T18:08:36.0257549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 7%] 2024-08-07T18:08:36.0258887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 7%] 2024-08-07T18:08:36.0260159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 7%] 2024-08-07T18:08:36.0261445Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0101s] [ 7%] 2024-08-07T18:08:36.0262787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0101s] [ 7%] 2024-08-07T18:08:36.0264116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 7%] 2024-08-07T18:08:36.0265424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 7%] 2024-08-07T18:08:36.0266691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 7%] 2024-08-07T18:08:36.0267986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 7%] 2024-08-07T18:08:36.0269264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 7%] 2024-08-07T18:08:36.0270555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 7%] 2024-08-07T18:08:36.0271831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0093s] [ 7%] 2024-08-07T18:08:36.0273137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 8%] 2024-08-07T18:08:36.0274396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 8%] 2024-08-07T18:08:36.0275691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 8%] 2024-08-07T18:08:36.0277024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0100s] [ 8%] 2024-08-07T18:08:36.0278350Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 8%] 2024-08-07T18:08:36.0279633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0099s] [ 8%] 2024-08-07T18:08:36.0280905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 8%] 2024-08-07T18:08:36.0282239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0283562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 8%] 2024-08-07T18:08:36.0284850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 8%] 2024-08-07T18:08:36.0286160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 8%] 2024-08-07T18:08:36.0287438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 8%] 2024-08-07T18:08:36.0288723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 8%] 2024-08-07T18:08:36.0289985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 8%] 2024-08-07T18:08:36.0291278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 8%] 2024-08-07T18:08:36.0292556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 8%] 2024-08-07T18:08:36.0293902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0295511Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0296916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 8%] 2024-08-07T18:08:36.0298191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 8%] 2024-08-07T18:08:36.0299476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 8%] 2024-08-07T18:08:36.0300811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 8%] 2024-08-07T18:08:36.0302151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 8%] 2024-08-07T18:08:36.0303433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0304703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 8%] 2024-08-07T18:08:36.0306002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 8%] 2024-08-07T18:08:36.0307285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 8%] 2024-08-07T18:08:36.0308563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 8%] 2024-08-07T18:08:36.0309830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 8%] 2024-08-07T18:08:36.0311117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 8%] 2024-08-07T18:08:36.0312386Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 8%] 2024-08-07T18:08:36.0313649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0314984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 8%] 2024-08-07T18:08:36.0316339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0317617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 8%] 2024-08-07T18:08:36.0318876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0320248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 8%] 2024-08-07T18:08:36.0321577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 8%] 2024-08-07T18:08:36.0322865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 8%] 2024-08-07T18:08:36.0324135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 8%] 2024-08-07T18:08:36.0325442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 8%] 2024-08-07T18:08:36.0326736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 8%] 2024-08-07T18:08:36.0328014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 8%] 2024-08-07T18:08:36.0329299Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 8%] 2024-08-07T18:08:36.0330580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 8%] 2024-08-07T18:08:36.0331864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 8%] 2024-08-07T18:08:36.0333175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 8%] 2024-08-07T18:08:36.0334529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 8%] 2024-08-07T18:08:36.0335806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 8%] 2024-08-07T18:08:36.0337102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0338431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 8%] 2024-08-07T18:08:36.0339756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 8%] 2024-08-07T18:08:36.0341061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 8%] 2024-08-07T18:08:36.0342305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 8%] 2024-08-07T18:08:36.0343642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 8%] 2024-08-07T18:08:36.0344921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 8%] 2024-08-07T18:08:36.0346226Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 8%] 2024-08-07T18:08:36.0347495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 8%] 2024-08-07T18:08:36.0348793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 8%] 2024-08-07T18:08:36.0350056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 8%] 2024-08-07T18:08:36.0351341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 8%] 2024-08-07T18:08:36.0352642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 8%] 2024-08-07T18:08:36.0353962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 8%] 2024-08-07T18:08:36.0355243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 8%] 2024-08-07T18:08:36.0356534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 8%] 2024-08-07T18:08:36.0357866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 8%] 2024-08-07T18:08:36.0359195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 8%] 2024-08-07T18:08:36.0360468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 8%] 2024-08-07T18:08:36.0361735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 8%] 2024-08-07T18:08:36.0363019Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 8%] 2024-08-07T18:08:36.0364279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 8%] 2024-08-07T18:08:36.0365539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 8%] 2024-08-07T18:08:36.0366853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 8%] 2024-08-07T18:08:36.0368126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 8%] 2024-08-07T18:08:36.0369411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 8%] 2024-08-07T18:08:36.0370708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 8%] 2024-08-07T18:08:36.0372046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 8%] 2024-08-07T18:08:36.0373310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 9%] 2024-08-07T18:08:36.0374590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 9%] 2024-08-07T18:08:36.0375945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 9%] 2024-08-07T18:08:36.0377298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 9%] 2024-08-07T18:08:36.0378556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 9%] 2024-08-07T18:08:36.0379830Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 9%] 2024-08-07T18:08:36.0381098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0382370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 9%] 2024-08-07T18:08:36.0383646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 9%] 2024-08-07T18:08:36.0384907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 9%] 2024-08-07T18:08:36.0386199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 9%] 2024-08-07T18:08:36.0387474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 9%] 2024-08-07T18:08:36.0388747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 9%] 2024-08-07T18:08:36.0390060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 9%] 2024-08-07T18:08:36.0391374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 9%] 2024-08-07T18:08:36.0392645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0393892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0395538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 9%] 2024-08-07T18:08:36.0396915Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 9%] 2024-08-07T18:08:36.0398194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 9%] 2024-08-07T18:08:36.0399449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 9%] 2024-08-07T18:08:36.0400731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 9%] 2024-08-07T18:08:36.0401992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0403253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 9%] 2024-08-07T18:08:36.0404524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0405792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 9%] 2024-08-07T18:08:36.0407095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 9%] 2024-08-07T18:08:36.0408357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 9%] 2024-08-07T18:08:36.0409700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 9%] 2024-08-07T18:08:36.0411033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 9%] 2024-08-07T18:08:36.0412303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 9%] 2024-08-07T18:08:36.0413603Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0414934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 9%] 2024-08-07T18:08:36.0416192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 9%] 2024-08-07T18:08:36.0417460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 9%] 2024-08-07T18:08:36.0418745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 9%] 2024-08-07T18:08:36.0420014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 9%] 2024-08-07T18:08:36.0421339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 9%] 2024-08-07T18:08:36.0422596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 9%] 2024-08-07T18:08:36.0423881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 9%] 2024-08-07T18:08:36.0425144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0426420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 9%] 2024-08-07T18:08:36.0427730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 9%] 2024-08-07T18:08:36.0429051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 9%] 2024-08-07T18:08:36.0430329Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 9%] 2024-08-07T18:08:36.0431590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 9%] 2024-08-07T18:08:36.0432903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0434214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0435484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 9%] 2024-08-07T18:08:36.0436755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 9%] 2024-08-07T18:08:36.0438047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 9%] 2024-08-07T18:08:36.0439316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 9%] 2024-08-07T18:08:36.0440587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 9%] 2024-08-07T18:08:36.0441855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 9%] 2024-08-07T18:08:36.0443130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0091s] [ 9%] 2024-08-07T18:08:36.0444418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 9%] 2024-08-07T18:08:36.0445675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 9%] 2024-08-07T18:08:36.0447031Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 9%] 2024-08-07T18:08:36.0448351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0100s] [ 9%] 2024-08-07T18:08:36.0449639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 9%] 2024-08-07T18:08:36.0450906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 9%] 2024-08-07T18:08:36.0452242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 9%] 2024-08-07T18:08:36.0453554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0092s] [ 9%] 2024-08-07T18:08:36.0454826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 9%] 2024-08-07T18:08:36.0456111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 9%] 2024-08-07T18:08:36.0457405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 9%] 2024-08-07T18:08:36.0458689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 9%] 2024-08-07T18:08:36.0459957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 9%] 2024-08-07T18:08:36.0461246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 9%] 2024-08-07T18:08:36.0462521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 9%] 2024-08-07T18:08:36.0463800Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0111s] [ 9%] 2024-08-07T18:08:36.0465108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 9%] 2024-08-07T18:08:36.0466443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 9%] 2024-08-07T18:08:36.0467728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 9%] 2024-08-07T18:08:36.0468994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0121s] [ 9%] 2024-08-07T18:08:36.0470328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0122s] [ 9%] 2024-08-07T18:08:36.0471664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 9%] 2024-08-07T18:08:36.0472952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 10%] 2024-08-07T18:08:36.0474205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 10%] 2024-08-07T18:08:36.0475490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 10%] 2024-08-07T18:08:36.0476774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0478057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 10%] 2024-08-07T18:08:36.0479327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0094s] [ 10%] 2024-08-07T18:08:36.0480605Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0095s] [ 10%] 2024-08-07T18:08:36.0481886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 10%] 2024-08-07T18:08:36.0483156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 10%] 2024-08-07T18:08:36.0484475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 10%] 2024-08-07T18:08:36.0485790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0487087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 10%] 2024-08-07T18:08:36.0488352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 10%] 2024-08-07T18:08:36.0489683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 10%] 2024-08-07T18:08:36.0491049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0492310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0493599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 10%] 2024-08-07T18:08:36.0494866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0496407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 10%] 2024-08-07T18:08:36.0497685Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 10%] 2024-08-07T18:08:36.0498972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 10%] 2024-08-07T18:08:36.0500242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0501531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 10%] 2024-08-07T18:08:36.0502869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0504223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 10%] 2024-08-07T18:08:36.0505490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0506749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0508098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 10%] 2024-08-07T18:08:36.0509434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 10%] 2024-08-07T18:08:36.0510707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 10%] 2024-08-07T18:08:36.0511970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0513251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0514530Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 10%] 2024-08-07T18:08:36.0515798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 10%] 2024-08-07T18:08:36.0517074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0518335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0519615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 10%] 2024-08-07T18:08:36.0520921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 10%] 2024-08-07T18:08:36.0522247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0523556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 10%] 2024-08-07T18:08:36.0524832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 10%] 2024-08-07T18:08:36.0526084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 10%] 2024-08-07T18:08:36.0527426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 10%] 2024-08-07T18:08:36.0528734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 10%] 2024-08-07T18:08:36.0530013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 10%] 2024-08-07T18:08:36.0531272Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0532544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0533823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 10%] 2024-08-07T18:08:36.0535084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 10%] 2024-08-07T18:08:36.0536362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 10%] 2024-08-07T18:08:36.0537653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 10%] 2024-08-07T18:08:36.0538923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 10%] 2024-08-07T18:08:36.0540184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0541684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 10%] 2024-08-07T18:08:36.0543010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 10%] 2024-08-07T18:08:36.0544331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 10%] 2024-08-07T18:08:36.0545671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 10%] 2024-08-07T18:08:36.0546984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 10%] 2024-08-07T18:08:36.0548280Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 10%] 2024-08-07T18:08:36.0549530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 10%] 2024-08-07T18:08:36.0550816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 10%] 2024-08-07T18:08:36.0552088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 10%] 2024-08-07T18:08:36.0553422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 10%] 2024-08-07T18:08:36.0554690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 10%] 2024-08-07T18:08:36.0555964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 10%] 2024-08-07T18:08:36.0557260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 10%] 2024-08-07T18:08:36.0558515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 10%] 2024-08-07T18:08:36.0559832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 10%] 2024-08-07T18:08:36.0561141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 10%] 2024-08-07T18:08:36.0562412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 10%] 2024-08-07T18:08:36.0563670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0564987Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 10%] 2024-08-07T18:08:36.0566298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 10%] 2024-08-07T18:08:36.0567600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0143s] [ 10%] 2024-08-07T18:08:36.0568886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0152s] [ 10%] 2024-08-07T18:08:36.0570162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0088s] [ 10%] 2024-08-07T18:08:36.0571466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 10%] 2024-08-07T18:08:36.0572742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0217s] [ 10%] 2024-08-07T18:08:36.0574046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0221s] [ 11%] 2024-08-07T18:08:36.0575331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0100s] [ 11%] 2024-08-07T18:08:36.0576624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0095s] [ 11%] 2024-08-07T18:08:36.0577918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0144s] [ 11%] 2024-08-07T18:08:36.0579253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0156s] [ 11%] 2024-08-07T18:08:36.0580570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0092s] [ 11%] 2024-08-07T18:08:36.0581843Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 11%] 2024-08-07T18:08:36.0583135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0221s] [ 11%] 2024-08-07T18:08:36.0584463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0224s] [ 11%] 2024-08-07T18:08:36.0585822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0100s] [ 11%] 2024-08-07T18:08:36.0587085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0099s] [ 11%] 2024-08-07T18:08:36.0588394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0162s] [ 11%] 2024-08-07T18:08:36.0589680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0180s] [ 11%] 2024-08-07T18:08:36.0590974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0099s] [ 11%] 2024-08-07T18:08:36.0592248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 11%] 2024-08-07T18:08:36.0593553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0234s] [ 11%] 2024-08-07T18:08:36.0594839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0236s] [ 11%] 2024-08-07T18:08:36.0596441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0106s] [ 11%] 2024-08-07T18:08:36.0597836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 11%] 2024-08-07T18:08:36.0599181Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0137s] [ 11%] 2024-08-07T18:08:36.0600470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0144s] [ 11%] 2024-08-07T18:08:36.0601735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0086s] [ 11%] 2024-08-07T18:08:36.0603097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0096s] [ 11%] 2024-08-07T18:08:36.0604432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0216s] [ 11%] 2024-08-07T18:08:36.0605726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0220s] [ 11%] 2024-08-07T18:08:36.0607001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 11%] 2024-08-07T18:08:36.0608309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0097s] [ 11%] 2024-08-07T18:08:36.0609607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0246s] [ 11%] 2024-08-07T18:08:36.0610881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0270s] [ 11%] 2024-08-07T18:08:36.0612175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0111s] [ 11%] 2024-08-07T18:08:36.0613467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0111s] [ 11%] 2024-08-07T18:08:36.0614753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0390s] [ 11%] 2024-08-07T18:08:36.0616076Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0401s] [ 11%] 2024-08-07T18:08:36.0617440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0125s] [ 11%] 2024-08-07T18:08:36.0618733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0121s] [ 11%] 2024-08-07T18:08:36.0620024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0253s] [ 11%] 2024-08-07T18:08:36.0621398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0286s] [ 11%] 2024-08-07T18:08:36.0622724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0117s] [ 11%] 2024-08-07T18:08:36.0624039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0115s] [ 11%] 2024-08-07T18:08:36.0625315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0397s] [ 11%] 2024-08-07T18:08:36.0626618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0406s] [ 11%] 2024-08-07T18:08:36.0627912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0127s] [ 11%] 2024-08-07T18:08:36.0629212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0128s] [ 11%] 2024-08-07T18:08:36.0630484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0277s] [ 11%] 2024-08-07T18:08:36.0631788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0318s] [ 11%] 2024-08-07T18:08:36.0633071Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0129s] [ 11%] 2024-08-07T18:08:36.0634369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0126s] [ 11%] 2024-08-07T18:08:36.0635692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0417s] [ 11%] 2024-08-07T18:08:36.0637743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0422s] [ 11%] 2024-08-07T18:08:36.0639064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0137s] [ 11%] 2024-08-07T18:08:36.0640348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0134s] [ 11%] 2024-08-07T18:08:36.0641695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0235s] [ 11%] 2024-08-07T18:08:36.0643029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0256s] [ 11%] 2024-08-07T18:08:36.0644318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0109s] [ 11%] 2024-08-07T18:08:36.0645600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0109s] [ 11%] 2024-08-07T18:08:36.0646962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0381s] [ 11%] 2024-08-07T18:08:36.0648245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0392s] [ 11%] 2024-08-07T18:08:36.0649534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0124s] [ 11%] 2024-08-07T18:08:36.0650839Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0119s] [ 11%] 2024-08-07T18:08:36.0652125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0096s] [ 11%] 2024-08-07T18:08:36.0653433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 11%] 2024-08-07T18:08:36.0654749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 11%] 2024-08-07T18:08:36.0656096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 11%] 2024-08-07T18:08:36.0657382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0142s] [ 11%] 2024-08-07T18:08:36.0658677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0142s] [ 11%] 2024-08-07T18:08:36.0659991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 11%] 2024-08-07T18:08:36.0661350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 11%] 2024-08-07T18:08:36.0662620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0099s] [ 11%] 2024-08-07T18:08:36.0663892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 11%] 2024-08-07T18:08:36.0665186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 11%] 2024-08-07T18:08:36.0666473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 11%] 2024-08-07T18:08:36.0667768Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0145s] [ 11%] 2024-08-07T18:08:36.0669072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0146s] [ 11%] 2024-08-07T18:08:36.0670376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 11%] 2024-08-07T18:08:36.0671652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 11%] 2024-08-07T18:08:36.0672934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0108s] [ 11%] 2024-08-07T18:08:36.0674255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0110s] [ 11%] 2024-08-07T18:08:36.0675619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0084s] [ 12%] 2024-08-07T18:08:36.0676911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 12%] 2024-08-07T18:08:36.0678183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0155s] [ 12%] 2024-08-07T18:08:36.0679541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0160s] [ 12%] 2024-08-07T18:08:36.0680864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 12%] 2024-08-07T18:08:36.0682158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 12%] 2024-08-07T18:08:36.0683427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0094s] [ 12%] 2024-08-07T18:08:36.0684722Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0093s] [ 12%] 2024-08-07T18:08:36.0685982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 12%] 2024-08-07T18:08:36.0687268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 12%] 2024-08-07T18:08:36.0688539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0140s] [ 12%] 2024-08-07T18:08:36.0689826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0142s] [ 12%] 2024-08-07T18:08:36.0691157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 12%] 2024-08-07T18:08:36.0692489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 12%] 2024-08-07T18:08:36.0693832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0091s] [ 12%] 2024-08-07T18:08:36.0695366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 12%] 2024-08-07T18:08:36.0696660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 12%] 2024-08-07T18:08:36.0698002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 12%] 2024-08-07T18:08:36.0699381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0128s] [ 12%] 2024-08-07T18:08:36.0700649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0125s] [ 12%] 2024-08-07T18:08:36.0701910Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 12%] 2024-08-07T18:08:36.0703208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 12%] 2024-08-07T18:08:36.0704475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0094s] [ 12%] 2024-08-07T18:08:36.0705758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0093s] [ 12%] 2024-08-07T18:08:36.0707020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0082s] [ 12%] 2024-08-07T18:08:36.0708314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 12%] 2024-08-07T18:08:36.0709607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0132s] [ 12%] 2024-08-07T18:08:36.0710912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 12%] 2024-08-07T18:08:36.0712224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0088s] [ 12%] 2024-08-07T18:08:36.0713581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 12%] 2024-08-07T18:08:36.0714841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0108s] [ 12%] 2024-08-07T18:08:36.0716120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 12%] 2024-08-07T18:08:36.0717445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0085s] [ 12%] 2024-08-07T18:08:36.0718760Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 12%] 2024-08-07T18:08:36.0720061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0143s] [ 12%] 2024-08-07T18:08:36.0721371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0141s] [ 12%] 2024-08-07T18:08:36.0722679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 12%] 2024-08-07T18:08:36.0723977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 12%] 2024-08-07T18:08:36.0725253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0091s] [ 12%] 2024-08-07T18:08:36.0726518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 12%] 2024-08-07T18:08:36.0727790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 12%] 2024-08-07T18:08:36.0729078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 12%] 2024-08-07T18:08:36.0730389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0127s] [ 12%] 2024-08-07T18:08:36.0731726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0125s] [ 12%] 2024-08-07T18:08:36.0732994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 12%] 2024-08-07T18:08:36.0734285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 12%] 2024-08-07T18:08:36.0735594Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0461s] [ 12%] 2024-08-07T18:08:36.0736947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0522s] [ 12%] 2024-08-07T18:08:36.0738215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0156s] [ 12%] 2024-08-07T18:08:36.0739533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0150s] [ 12%] 2024-08-07T18:08:36.0740813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0733s] [ 12%] 2024-08-07T18:08:36.0742093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0770s] [ 12%] 2024-08-07T18:08:36.0743382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0174s] [ 12%] 2024-08-07T18:08:36.0744660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0180s] [ 12%] 2024-08-07T18:08:36.0745956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0475s] [ 12%] 2024-08-07T18:08:36.0747230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0544s] [ 12%] 2024-08-07T18:08:36.0748514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0166s] [ 12%] 2024-08-07T18:08:36.0749852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0164s] [ 12%] 2024-08-07T18:08:36.0751193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0745s] [ 12%] 2024-08-07T18:08:36.0752467Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0773s] [ 12%] 2024-08-07T18:08:36.0753738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0180s] [ 12%] 2024-08-07T18:08:36.0755076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0178s] [ 12%] 2024-08-07T18:08:36.0756416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0525s] [ 12%] 2024-08-07T18:08:36.0757704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0601s] [ 12%] 2024-08-07T18:08:36.0758981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0186s] [ 12%] 2024-08-07T18:08:36.0760290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0180s] [ 12%] 2024-08-07T18:08:36.0761562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0788s] [ 12%] 2024-08-07T18:08:36.0762859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0810s] [ 12%] 2024-08-07T18:08:36.0764136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0207s] [ 12%] 2024-08-07T18:08:36.0765440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0205s] [ 12%] 2024-08-07T18:08:36.0766711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0448s] [ 12%] 2024-08-07T18:08:36.0768027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0491s] [ 12%] 2024-08-07T18:08:36.0769390Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0153s] [ 12%] 2024-08-07T18:08:36.0770665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0154s] [ 12%] 2024-08-07T18:08:36.0771952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0726s] [ 12%] 2024-08-07T18:08:36.0773270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0762s] [ 12%] 2024-08-07T18:08:36.0774618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0175s] [ 12%] 2024-08-07T18:08:36.0775893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0175s] [ 12%] 2024-08-07T18:08:36.0777178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0096s] [ 13%] 2024-08-07T18:08:36.0778455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 13%] 2024-08-07T18:08:36.0779749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0074s] [ 13%] 2024-08-07T18:08:36.0781046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 13%] 2024-08-07T18:08:36.0782316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0135s] [ 13%] 2024-08-07T18:08:36.0783621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0135s] [ 13%] 2024-08-07T18:08:36.0784892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 13%] 2024-08-07T18:08:36.0786182Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 13%] 2024-08-07T18:08:36.0787490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0098s] [ 13%] 2024-08-07T18:08:36.0788832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 13%] 2024-08-07T18:08:36.0790110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 13%] 2024-08-07T18:08:36.0791442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 13%] 2024-08-07T18:08:36.0792763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0141s] [ 13%] 2024-08-07T18:08:36.0794050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0143s] [ 13%] 2024-08-07T18:08:36.0795630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 13%] 2024-08-07T18:08:36.0796930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 13%] 2024-08-07T18:08:36.0798229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0102s] [ 13%] 2024-08-07T18:08:36.0799513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0110s] [ 13%] 2024-08-07T18:08:36.0800798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 13%] 2024-08-07T18:08:36.0802075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 13%] 2024-08-07T18:08:36.0803371Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0149s] [ 13%] 2024-08-07T18:08:36.0804642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0147s] [ 13%] 2024-08-07T18:08:36.0805985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 13%] 2024-08-07T18:08:36.0807355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 13%] 2024-08-07T18:08:36.0808612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0097s] [ 13%] 2024-08-07T18:08:36.0809924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 13%] 2024-08-07T18:08:36.0811243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 13%] 2024-08-07T18:08:36.0812597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 13%] 2024-08-07T18:08:36.0813857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0140s] [ 13%] 2024-08-07T18:08:36.0815152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0138s] [ 13%] 2024-08-07T18:08:36.0816425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 13%] 2024-08-07T18:08:36.0817718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 13%] 2024-08-07T18:08:36.0818983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0088s] [ 13%] 2024-08-07T18:08:36.0820312Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 13%] 2024-08-07T18:08:36.0821611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0074s] [ 13%] 2024-08-07T18:08:36.0822877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 13%] 2024-08-07T18:08:36.0824203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0130s] [ 13%] 2024-08-07T18:08:36.0825522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 13%] 2024-08-07T18:08:36.0826812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 13%] 2024-08-07T18:08:36.0828079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 13%] 2024-08-07T18:08:36.0829424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0095s] [ 13%] 2024-08-07T18:08:36.0830752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 13%] 2024-08-07T18:08:36.0832006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 13%] 2024-08-07T18:08:36.0833285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 13%] 2024-08-07T18:08:36.0834556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0137s] [ 13%] 2024-08-07T18:08:36.0835853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0134s] [ 13%] 2024-08-07T18:08:36.0837120Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0088s] [ 13%] 2024-08-07T18:08:36.0838411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 13%] 2024-08-07T18:08:36.0839694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0104s] [ 13%] 2024-08-07T18:08:36.0840990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 13%] 2024-08-07T18:08:36.0842249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 13%] 2024-08-07T18:08:36.0843588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 13%] 2024-08-07T18:08:36.0844910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0148s] [ 13%] 2024-08-07T18:08:36.0846178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0148s] [ 13%] 2024-08-07T18:08:36.0847457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 13%] 2024-08-07T18:08:36.0848772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 13%] 2024-08-07T18:08:36.0850128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0086s] [ 13%] 2024-08-07T18:08:36.0851391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 13%] 2024-08-07T18:08:36.0852722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 13%] 2024-08-07T18:08:36.0854004Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 13%] 2024-08-07T18:08:36.0855291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0129s] [ 13%] 2024-08-07T18:08:36.0856557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0126s] [ 13%] 2024-08-07T18:08:36.0857824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0085s] [ 13%] 2024-08-07T18:08:36.0859120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 13%] 2024-08-07T18:08:36.0860399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 13%] 2024-08-07T18:08:36.0861731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 13%] 2024-08-07T18:08:36.0863043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 13%] 2024-08-07T18:08:36.0864337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 13%] 2024-08-07T18:08:36.0865602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 13%] 2024-08-07T18:08:36.0866939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 13%] 2024-08-07T18:08:36.0868263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 13%] 2024-08-07T18:08:36.0869570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 13%] 2024-08-07T18:08:36.0870838Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 13%] 2024-08-07T18:08:36.0872116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 13%] 2024-08-07T18:08:36.0873410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 13%] 2024-08-07T18:08:36.0874687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 13%] 2024-08-07T18:08:36.0875975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0084s] [ 13%] 2024-08-07T18:08:36.0877260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 14%] 2024-08-07T18:08:36.0878553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 14%] 2024-08-07T18:08:36.0879858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 14%] 2024-08-07T18:08:36.0881179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 14%] 2024-08-07T18:08:36.0882503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 14%] 2024-08-07T18:08:36.0883769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 14%] 2024-08-07T18:08:36.0885060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 14%] 2024-08-07T18:08:36.0886373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 14%] 2024-08-07T18:08:36.0887716Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 14%] 2024-08-07T18:08:36.0888985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 14%] 2024-08-07T18:08:36.0890295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 14%] 2024-08-07T18:08:36.0891567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 14%] 2024-08-07T18:08:36.0892850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 14%] 2024-08-07T18:08:36.0894114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 14%] 2024-08-07T18:08:36.0895708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 14%] 2024-08-07T18:08:36.0896998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 14%] 2024-08-07T18:08:36.0898269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 14%] 2024-08-07T18:08:36.0899643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 14%] 2024-08-07T18:08:36.0901002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 14%] 2024-08-07T18:08:36.0902295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 14%] 2024-08-07T18:08:36.0903561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 14%] 2024-08-07T18:08:36.0904900Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 14%] 2024-08-07T18:08:36.0906239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0907526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 14%] 2024-08-07T18:08:36.0908797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0095s] [ 14%] 2024-08-07T18:08:36.0910093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 14%] 2024-08-07T18:08:36.0911397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 14%] 2024-08-07T18:08:36.0912664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0074s] [ 14%] 2024-08-07T18:08:36.0913957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 14%] 2024-08-07T18:08:36.0915229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 14%] 2024-08-07T18:08:36.0916519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0917814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 14%] 2024-08-07T18:08:36.0919156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 14%] 2024-08-07T18:08:36.0920540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 14%] 2024-08-07T18:08:36.0921837Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 14%] 2024-08-07T18:08:36.0923102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0085s] [ 14%] 2024-08-07T18:08:36.0924474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 14%] 2024-08-07T18:08:36.0925827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 14%] 2024-08-07T18:08:36.0927094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 14%] 2024-08-07T18:08:36.0928386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0104s] [ 14%] 2024-08-07T18:08:36.0929667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 14%] 2024-08-07T18:08:36.0930968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 14%] 2024-08-07T18:08:36.0932236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 14%] 2024-08-07T18:08:36.0933525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 14%] 2024-08-07T18:08:36.0934788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 14%] 2024-08-07T18:08:36.0936046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 14%] 2024-08-07T18:08:36.0937374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 14%] 2024-08-07T18:08:36.0938694Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 14%] 2024-08-07T18:08:36.0940002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 14%] 2024-08-07T18:08:36.0941268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 14%] 2024-08-07T18:08:36.0942598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 14%] 2024-08-07T18:08:36.0943914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0945193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 14%] 2024-08-07T18:08:36.0946453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 14%] 2024-08-07T18:08:36.0947735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 14%] 2024-08-07T18:08:36.0949001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 14%] 2024-08-07T18:08:36.0950291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 14%] 2024-08-07T18:08:36.0951577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 14%] 2024-08-07T18:08:36.0952931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 14%] 2024-08-07T18:08:36.0954209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0955470Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0956807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 14%] 2024-08-07T18:08:36.0958143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0959415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 14%] 2024-08-07T18:08:36.0960700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 14%] 2024-08-07T18:08:36.0962006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 14%] 2024-08-07T18:08:36.0963343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 14%] 2024-08-07T18:08:36.0964599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 14%] 2024-08-07T18:08:36.0965887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 14%] 2024-08-07T18:08:36.0967155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 14%] 2024-08-07T18:08:36.0968433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0969705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 14%] 2024-08-07T18:08:36.0970994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 14%] 2024-08-07T18:08:36.0972347Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 14%] 2024-08-07T18:08:36.0973633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 14%] 2024-08-07T18:08:36.0974984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0976362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 14%] 2024-08-07T18:08:36.0977640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 15%] 2024-08-07T18:08:36.0978898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 15%] 2024-08-07T18:08:36.0980240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 15%] 2024-08-07T18:08:36.0981564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 15%] 2024-08-07T18:08:36.0982836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 15%] 2024-08-07T18:08:36.0984096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 15%] 2024-08-07T18:08:36.0985375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 15%] 2024-08-07T18:08:36.0986644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 15%] 2024-08-07T18:08:36.0987897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 15%] 2024-08-07T18:08:36.0989182Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 15%] 2024-08-07T18:08:36.0990478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 15%] 2024-08-07T18:08:36.0991763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 15%] 2024-08-07T18:08:36.0993033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 15%] 2024-08-07T18:08:36.0994347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 15%] 2024-08-07T18:08:36.0995934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 15%] 2024-08-07T18:08:36.0997232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 15%] 2024-08-07T18:08:36.0998485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 15%] 2024-08-07T18:08:36.0999850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 15%] 2024-08-07T18:08:36.1001198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 15%] 2024-08-07T18:08:36.1002469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 15%] 2024-08-07T18:08:36.1003758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 15%] 2024-08-07T18:08:36.1005031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 15%] 2024-08-07T18:08:36.1006306Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 15%] 2024-08-07T18:08:36.1007571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 15%] 2024-08-07T18:08:36.1008852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 15%] 2024-08-07T18:08:36.1010135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 15%] 2024-08-07T18:08:36.1011421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 15%] 2024-08-07T18:08:36.1012752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 15%] 2024-08-07T18:08:36.1014087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 15%] 2024-08-07T18:08:36.1015377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 15%] 2024-08-07T18:08:36.1016632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 15%] 2024-08-07T18:08:36.1017963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 15%] 2024-08-07T18:08:36.1020515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 15%] 2024-08-07T18:08:36.1022913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 15%] 2024-08-07T18:08:36.1025371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 15%] 2024-08-07T18:08:36.1027772Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 15%] 2024-08-07T18:08:36.1030182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 15%] 2024-08-07T18:08:36.1032574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 15%] 2024-08-07T18:08:36.1034969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0113s] [ 15%] 2024-08-07T18:08:36.1037360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 15%] 2024-08-07T18:08:36.1040522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 15%] 2024-08-07T18:08:36.1043784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 15%] 2024-08-07T18:08:36.1046285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0154s] [ 15%] 2024-08-07T18:08:36.1048764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0157s] [ 15%] 2024-08-07T18:08:36.1051152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0088s] [ 15%] 2024-08-07T18:08:36.1053555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 15%] 2024-08-07T18:08:36.1056009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0118s] [ 15%] 2024-08-07T18:08:36.1058503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0126s] [ 15%] 2024-08-07T18:08:36.1061538Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0082s] [ 15%] 2024-08-07T18:08:36.1064645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 15%] 2024-08-07T18:08:36.1067055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0155s] [ 15%] 2024-08-07T18:08:36.1069461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0162s] [ 15%] 2024-08-07T18:08:36.1071864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 15%] 2024-08-07T18:08:36.1074273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 15%] 2024-08-07T18:08:36.1076691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0131s] [ 15%] 2024-08-07T18:08:36.1079342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0137s] [ 15%] 2024-08-07T18:08:36.1082053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0087s] [ 15%] 2024-08-07T18:08:36.1085195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 15%] 2024-08-07T18:08:36.1087619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0167s] [ 15%] 2024-08-07T18:08:36.1090024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0169s] [ 15%] 2024-08-07T18:08:36.1092516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 15%] 2024-08-07T18:08:36.1094968Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 15%] 2024-08-07T18:08:36.1097682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0113s] [ 15%] 2024-08-07T18:08:36.1100338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 15%] 2024-08-07T18:08:36.1102949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 15%] 2024-08-07T18:08:36.1105710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 15%] 2024-08-07T18:08:36.1108110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0154s] [ 15%] 2024-08-07T18:08:36.1110509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0155s] [ 15%] 2024-08-07T18:08:36.1112915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 15%] 2024-08-07T18:08:36.1115342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 15%] 2024-08-07T18:08:36.1117735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 15%] 2024-08-07T18:08:36.1120247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 15%] 2024-08-07T18:08:36.1122787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 15%] 2024-08-07T18:08:36.1125181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 15%] 2024-08-07T18:08:36.1127567Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 15%] 2024-08-07T18:08:36.1130028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 15%] 2024-08-07T18:08:36.1132493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 15%] 2024-08-07T18:08:36.1134868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 15%] 2024-08-07T18:08:36.1137265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 16%] 2024-08-07T18:08:36.1139666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 16%] 2024-08-07T18:08:36.1142063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1144453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 16%] 2024-08-07T18:08:36.1146846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1149236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 16%] 2024-08-07T18:08:36.1151630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1154122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 16%] 2024-08-07T18:08:36.1156577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 16%] 2024-08-07T18:08:36.1158987Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 16%] 2024-08-07T18:08:36.1161364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1163804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 16%] 2024-08-07T18:08:36.1166251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 16%] 2024-08-07T18:08:36.1168652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1171059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1173464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1175841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 16%] 2024-08-07T18:08:36.1178266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 16%] 2024-08-07T18:08:36.1180651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1183078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 16%] 2024-08-07T18:08:36.1185499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1187880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1190308Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1192784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 16%] 2024-08-07T18:08:36.1195507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 16%] 2024-08-07T18:08:36.1197942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 16%] 2024-08-07T18:08:36.1200412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 16%] 2024-08-07T18:08:36.1202867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1205259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1207657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 16%] 2024-08-07T18:08:36.1210064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 16%] 2024-08-07T18:08:36.1212456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 16%] 2024-08-07T18:08:36.1214821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1217221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 16%] 2024-08-07T18:08:36.1219625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 16%] 2024-08-07T18:08:36.1222056Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 16%] 2024-08-07T18:08:36.1224507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1226955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1229352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 16%] 2024-08-07T18:08:36.1231736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1234182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 16%] 2024-08-07T18:08:36.1236629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 16%] 2024-08-07T18:08:36.1239006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 16%] 2024-08-07T18:08:36.1241391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1243781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1246184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1248594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1250988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 16%] 2024-08-07T18:08:36.1253383Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 16%] 2024-08-07T18:08:36.1255788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1258253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 16%] 2024-08-07T18:08:36.1260659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 16%] 2024-08-07T18:08:36.1263110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 16%] 2024-08-07T18:08:36.1265484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1267849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 16%] 2024-08-07T18:08:36.1270286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 16%] 2024-08-07T18:08:36.1272754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 16%] 2024-08-07T18:08:36.1275164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 16%] 2024-08-07T18:08:36.1277555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1279940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 16%] 2024-08-07T18:08:36.1282348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 16%] 2024-08-07T18:08:36.1284749Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1287218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1289634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 16%] 2024-08-07T18:08:36.1292041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1294419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 16%] 2024-08-07T18:08:36.1297216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 16%] 2024-08-07T18:08:36.1299695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1302087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1304575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 16%] 2024-08-07T18:08:36.1307026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 16%] 2024-08-07T18:08:36.1309437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 16%] 2024-08-07T18:08:36.1311867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 16%] 2024-08-07T18:08:36.1314269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 16%] 2024-08-07T18:08:36.1316676Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 16%] 2024-08-07T18:08:36.1319078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 16%] 2024-08-07T18:08:36.1321517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 16%] 2024-08-07T18:08:36.1323934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 17%] 2024-08-07T18:08:36.1326351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 17%] 2024-08-07T18:08:36.1328798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 17%] 2024-08-07T18:08:36.1331222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 17%] 2024-08-07T18:08:36.1333661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 17%] 2024-08-07T18:08:36.1336040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 17%] 2024-08-07T18:08:36.1338425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 17%] 2024-08-07T18:08:36.1340869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 17%] 2024-08-07T18:08:36.1343327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 17%] 2024-08-07T18:08:36.1345700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 17%] 2024-08-07T18:08:36.1348117Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 17%] 2024-08-07T18:08:36.1350512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 17%] 2024-08-07T18:08:36.1352911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 17%] 2024-08-07T18:08:36.1355302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 17%] 2024-08-07T18:08:36.1357702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 17%] 2024-08-07T18:08:36.1360097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 17%] 2024-08-07T18:08:36.1362493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 17%] 2024-08-07T18:08:36.1364886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 17%] 2024-08-07T18:08:36.1367338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 17%] 2024-08-07T18:08:36.1369792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 17%] 2024-08-07T18:08:36.1372169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 17%] 2024-08-07T18:08:36.1374625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 17%] 2024-08-07T18:08:36.1377077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 17%] 2024-08-07T18:08:36.1379467Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 17%] 2024-08-07T18:08:36.1381888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 17%] 2024-08-07T18:08:36.1384310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 17%] 2024-08-07T18:08:36.1386713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 17%] 2024-08-07T18:08:36.1389128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0082s] [ 17%] 2024-08-07T18:08:36.1391526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 17%] 2024-08-07T18:08:36.1393936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 17%] 2024-08-07T18:08:36.1396679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 17%] 2024-08-07T18:08:36.1399059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 17%] 2024-08-07T18:08:36.1401548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 17%] 2024-08-07T18:08:36.1404058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 17%] 2024-08-07T18:08:36.1406449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 17%] 2024-08-07T18:08:36.1408840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 17%] 2024-08-07T18:08:36.1411309Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 17%] 2024-08-07T18:08:36.1413735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 17%] 2024-08-07T18:08:36.1416122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 17%] 2024-08-07T18:08:36.1418520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 17%] 2024-08-07T18:08:36.1420965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 17%] 2024-08-07T18:08:36.1423398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 17%] 2024-08-07T18:08:36.1425776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 17%] 2024-08-07T18:08:36.1428178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 17%] 2024-08-07T18:08:36.1430586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 17%] 2024-08-07T18:08:36.1432963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 17%] 2024-08-07T18:08:36.1435354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 17%] 2024-08-07T18:08:36.1437796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 17%] 2024-08-07T18:08:36.1440237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 17%] 2024-08-07T18:08:36.1442625Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 17%] 2024-08-07T18:08:36.1445036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 17%] 2024-08-07T18:08:36.1447482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 17%] 2024-08-07T18:08:36.1449923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 17%] 2024-08-07T18:08:36.1452327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 17%] 2024-08-07T18:08:36.1454790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 17%] 2024-08-07T18:08:36.1457267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 17%] 2024-08-07T18:08:36.1459702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 17%] 2024-08-07T18:08:36.1462073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 17%] 2024-08-07T18:08:36.1464474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 17%] 2024-08-07T18:08:36.1466894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 17%] 2024-08-07T18:08:36.1469276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 17%] 2024-08-07T18:08:36.1471730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 17%] 2024-08-07T18:08:36.1474173Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 17%] 2024-08-07T18:08:36.1476534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 17%] 2024-08-07T18:08:36.1478955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 17%] 2024-08-07T18:08:36.1481390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 17%] 2024-08-07T18:08:36.1483881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 17%] 2024-08-07T18:08:36.1486254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 17%] 2024-08-07T18:08:36.1488614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 17%] 2024-08-07T18:08:36.1490999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 17%] 2024-08-07T18:08:36.1493379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 17%] 2024-08-07T18:08:36.1496037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 17%] 2024-08-07T18:08:36.1498459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 17%] 2024-08-07T18:08:36.1500854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 17%] 2024-08-07T18:08:36.1503211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 17%] 2024-08-07T18:08:36.1505586Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 17%] 2024-08-07T18:08:36.1508047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 17%] 2024-08-07T18:08:36.1510500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 17%] 2024-08-07T18:08:36.1512875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 18%] 2024-08-07T18:08:36.1515256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 18%] 2024-08-07T18:08:36.1517716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 18%] 2024-08-07T18:08:36.1520165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 18%] 2024-08-07T18:08:36.1522658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 18%] 2024-08-07T18:08:36.1525040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 18%] 2024-08-07T18:08:36.1527430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 18%] 2024-08-07T18:08:36.1529802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 18%] 2024-08-07T18:08:36.1532204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 18%] 2024-08-07T18:08:36.1534599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 18%] 2024-08-07T18:08:36.1536998Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 18%] 2024-08-07T18:08:36.1539376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 18%] 2024-08-07T18:08:36.1541789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 18%] 2024-08-07T18:08:36.1544218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 18%] 2024-08-07T18:08:36.1546604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 18%] 2024-08-07T18:08:36.1548980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 18%] 2024-08-07T18:08:36.1551406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 18%] 2024-08-07T18:08:36.1553844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 18%] 2024-08-07T18:08:36.1556231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 18%] 2024-08-07T18:08:36.1558615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 18%] 2024-08-07T18:08:36.1561006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 18%] 2024-08-07T18:08:36.1563398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 18%] 2024-08-07T18:08:36.1565768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 18%] 2024-08-07T18:08:36.1568124Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 18%] 2024-08-07T18:08:36.1570525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 18%] 2024-08-07T18:08:36.1572924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 18%] 2024-08-07T18:08:36.1575302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 18%] 2024-08-07T18:08:36.1577757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 18%] 2024-08-07T18:08:36.1580173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 18%] 2024-08-07T18:08:36.1582557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0094s] [ 18%] 2024-08-07T18:08:36.1584960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 18%] 2024-08-07T18:08:36.1587394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 18%] 2024-08-07T18:08:36.1589860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 18%] 2024-08-07T18:08:36.1592271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0104s] [ 18%] 2024-08-07T18:08:36.1594654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 18%] 2024-08-07T18:08:36.1597324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 18%] 2024-08-07T18:08:36.1599741Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 18%] 2024-08-07T18:08:36.1602127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0101s] [ 18%] 2024-08-07T18:08:36.1604530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 18%] 2024-08-07T18:08:36.1606914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 18%] 2024-08-07T18:08:36.1609333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 18%] 2024-08-07T18:08:36.1611733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0110s] [ 18%] 2024-08-07T18:08:36.1614211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 18%] 2024-08-07T18:08:36.1616688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 18%] 2024-08-07T18:08:36.1619095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 18%] 2024-08-07T18:08:36.1621572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0119s] [ 18%] 2024-08-07T18:08:36.1624035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0123s] [ 18%] 2024-08-07T18:08:36.1626428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 18%] 2024-08-07T18:08:36.1628843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 18%] 2024-08-07T18:08:36.1631241Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0129s] [ 18%] 2024-08-07T18:08:36.1633643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0129s] [ 18%] 2024-08-07T18:08:36.1636039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 18%] 2024-08-07T18:08:36.1638438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 18%] 2024-08-07T18:08:36.1640858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 18%] 2024-08-07T18:08:36.1643252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 18%] 2024-08-07T18:08:36.1645613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 18%] 2024-08-07T18:08:36.1648094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 18%] 2024-08-07T18:08:36.1650548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0101s] [ 18%] 2024-08-07T18:08:36.1652936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0100s] [ 18%] 2024-08-07T18:08:36.1655347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 18%] 2024-08-07T18:08:36.1657782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 18%] 2024-08-07T18:08:36.1660208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 18%] 2024-08-07T18:08:36.1662593Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 18%] 2024-08-07T18:08:36.1664978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 18%] 2024-08-07T18:08:36.1667388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 18%] 2024-08-07T18:08:36.1669782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 18%] 2024-08-07T18:08:36.1672159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 18%] 2024-08-07T18:08:36.1674555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 18%] 2024-08-07T18:08:36.1676956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 18%] 2024-08-07T18:08:36.1679342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 18%] 2024-08-07T18:08:36.1681726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 18%] 2024-08-07T18:08:36.1684162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 18%] 2024-08-07T18:08:36.1686582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 18%] 2024-08-07T18:08:36.1688970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 18%] 2024-08-07T18:08:36.1691405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 18%] 2024-08-07T18:08:36.1693852Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 18%] 2024-08-07T18:08:36.1696507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 18%] 2024-08-07T18:08:36.1698878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 19%] 2024-08-07T18:08:36.1701267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 19%] 2024-08-07T18:08:36.1703708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 19%] 2024-08-07T18:08:36.1706093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 19%] 2024-08-07T18:08:36.1708480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 19%] 2024-08-07T18:08:36.1710899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 19%] 2024-08-07T18:08:36.1713273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 19%] 2024-08-07T18:08:36.1715661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 19%] 2024-08-07T18:08:36.1718146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 19%] 2024-08-07T18:08:36.1720637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 19%] 2024-08-07T18:08:36.1723040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 19%] 2024-08-07T18:08:36.1725402Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 19%] 2024-08-07T18:08:36.1727855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 19%] 2024-08-07T18:08:36.1730313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 19%] 2024-08-07T18:08:36.1732690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 19%] 2024-08-07T18:08:36.1735081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 19%] 2024-08-07T18:08:36.1737453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 19%] 2024-08-07T18:08:36.1739839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 19%] 2024-08-07T18:08:36.1742228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 19%] 2024-08-07T18:08:36.1744615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 19%] 2024-08-07T18:08:36.1747009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 19%] 2024-08-07T18:08:36.1749389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 19%] 2024-08-07T18:08:36.1751751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 19%] 2024-08-07T18:08:36.1754235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 19%] 2024-08-07T18:08:36.1756683Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 19%] 2024-08-07T18:08:36.1759068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 19%] 2024-08-07T18:08:36.1761453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 19%] 2024-08-07T18:08:36.1763859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 19%] 2024-08-07T18:08:36.1766319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 19%] 2024-08-07T18:08:36.1768699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 19%] 2024-08-07T18:08:36.1771080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 19%] 2024-08-07T18:08:36.1773497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 19%] 2024-08-07T18:08:36.1775872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 19%] 2024-08-07T18:08:36.1778270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 19%] 2024-08-07T18:08:36.1780729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 19%] 2024-08-07T18:08:36.1783121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 19%] 2024-08-07T18:08:36.1785505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 19%] 2024-08-07T18:08:36.1787955Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 19%] 2024-08-07T18:08:36.1790355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 19%] 2024-08-07T18:08:36.1792752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 19%] 2024-08-07T18:08:36.1795369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 19%] 2024-08-07T18:08:36.1797868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 19%] 2024-08-07T18:08:36.1800316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 19%] 2024-08-07T18:08:36.1802686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 19%] 2024-08-07T18:08:36.1805037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 19%] 2024-08-07T18:08:36.1807458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 19%] 2024-08-07T18:08:36.1809844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 19%] 2024-08-07T18:08:36.1812226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 19%] 2024-08-07T18:08:36.1814625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0081s] [ 19%] 2024-08-07T18:08:36.1817053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 19%] 2024-08-07T18:08:36.1819475Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 19%] 2024-08-07T18:08:36.1821943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 19%] 2024-08-07T18:08:36.1824429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 19%] 2024-08-07T18:08:36.1826917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 19%] 2024-08-07T18:08:36.1829335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 19%] 2024-08-07T18:08:36.1831745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 19%] 2024-08-07T18:08:36.1834204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0095s] [ 19%] 2024-08-07T18:08:36.1836713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 19%] 2024-08-07T18:08:36.1839123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0095s] [ 19%] 2024-08-07T18:08:36.1841547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0097s] [ 19%] 2024-08-07T18:08:36.1843966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0125s] [ 19%] 2024-08-07T18:08:36.1846378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0117s] [ 19%] 2024-08-07T18:08:36.1848794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0098s] [ 19%] 2024-08-07T18:08:36.1851225Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0100s] [ 19%] 2024-08-07T18:08:36.1853665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0106s] [ 19%] 2024-08-07T18:08:36.1856054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 19%] 2024-08-07T18:08:36.1858506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0116s] [ 19%] 2024-08-07T18:08:36.1860987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0113s] [ 19%] 2024-08-07T18:08:36.1863414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0127s] [ 19%] 2024-08-07T18:08:36.1865830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 19%] 2024-08-07T18:08:36.1868293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0107s] [ 19%] 2024-08-07T18:08:36.1870765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0110s] [ 19%] 2024-08-07T18:08:36.1873179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 19%] 2024-08-07T18:08:36.1875591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 19%] 2024-08-07T18:08:36.1878002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0076s] [ 19%] 2024-08-07T18:08:36.1880416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 19%] 2024-08-07T18:08:36.1882812Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0106s] [ 19%] 2024-08-07T18:08:36.1885229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 20%] 2024-08-07T18:08:36.1887657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 20%] 2024-08-07T18:08:36.1890083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 20%] 2024-08-07T18:08:36.1892510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0134s] [ 20%] 2024-08-07T18:08:36.1894980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0139s] [ 20%] 2024-08-07T18:08:36.1897755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0126s] [ 20%] 2024-08-07T18:08:36.1900172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0129s] [ 20%] 2024-08-07T18:08:36.1902586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0183s] [ 20%] 2024-08-07T18:08:36.1905077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0188s] [ 20%] 2024-08-07T18:08:36.1907566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0142s] [ 20%] 2024-08-07T18:08:36.1909977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0143s] [ 20%] 2024-08-07T18:08:36.1912392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0137s] [ 20%] 2024-08-07T18:08:36.1914816Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0147s] [ 20%] 2024-08-07T18:08:36.1917226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0141s] [ 20%] 2024-08-07T18:08:36.1919635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0144s] [ 20%] 2024-08-07T18:08:36.1922105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0188s] [ 20%] 2024-08-07T18:08:36.1924507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0192s] [ 20%] 2024-08-07T18:08:36.1926922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0158s] [ 20%] 2024-08-07T18:08:36.1929450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0156s] [ 20%] 2024-08-07T18:08:36.1931925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0153s] [ 20%] 2024-08-07T18:08:36.1934342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0166s] [ 20%] 2024-08-07T18:08:36.1936744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0167s] [ 20%] 2024-08-07T18:08:36.1939200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0167s] [ 20%] 2024-08-07T18:08:36.1941707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0207s] [ 20%] 2024-08-07T18:08:36.1944145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0209s] [ 20%] 2024-08-07T18:08:36.1946573Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0183s] [ 20%] 2024-08-07T18:08:36.1949004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0184s] [ 20%] 2024-08-07T18:08:36.1951417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0124s] [ 20%] 2024-08-07T18:08:36.1953819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0128s] [ 20%] 2024-08-07T18:08:36.1956232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0120s] [ 20%] 2024-08-07T18:08:36.1958652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0121s] [ 20%] 2024-08-07T18:08:36.1961063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0176s] [ 20%] 2024-08-07T18:08:36.1963447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0181s] [ 20%] 2024-08-07T18:08:36.1965923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0136s] [ 20%] 2024-08-07T18:08:36.1968396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0138s] [ 20%] 2024-08-07T18:08:36.1970802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 20%] 2024-08-07T18:08:36.1973248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 20%] 2024-08-07T18:08:36.1975708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 20%] 2024-08-07T18:08:36.1978089Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 20%] 2024-08-07T18:08:36.1980492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0084s] [ 20%] 2024-08-07T18:08:36.1982900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 20%] 2024-08-07T18:08:36.1985344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 20%] 2024-08-07T18:08:36.1987760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 20%] 2024-08-07T18:08:36.1990192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 20%] 2024-08-07T18:08:36.1992598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 20%] 2024-08-07T18:08:36.1995300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 20%] 2024-08-07T18:08:36.1997736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 20%] 2024-08-07T18:08:36.2000246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 20%] 2024-08-07T18:08:36.2002741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 20%] 2024-08-07T18:08:36.2005148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 20%] 2024-08-07T18:08:36.2007574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 20%] 2024-08-07T18:08:36.2010091Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 20%] 2024-08-07T18:08:36.2012575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 20%] 2024-08-07T18:08:36.2014998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 20%] 2024-08-07T18:08:36.2017382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 20%] 2024-08-07T18:08:36.2019806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 20%] 2024-08-07T18:08:36.2022292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 20%] 2024-08-07T18:08:36.2024699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 20%] 2024-08-07T18:08:36.2027109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 20%] 2024-08-07T18:08:36.2029550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 20%] 2024-08-07T18:08:36.2031931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 20%] 2024-08-07T18:08:36.2034379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 20%] 2024-08-07T18:08:36.2036839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 20%] 2024-08-07T18:08:36.2039302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 20%] 2024-08-07T18:08:36.2041713Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 20%] 2024-08-07T18:08:36.2044195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 20%] 2024-08-07T18:08:36.2046652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 20%] 2024-08-07T18:08:36.2049050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 20%] 2024-08-07T18:08:36.2051445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 20%] 2024-08-07T18:08:36.2053895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 20%] 2024-08-07T18:08:36.2056309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 20%] 2024-08-07T18:08:36.2058690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 20%] 2024-08-07T18:08:36.2061106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 20%] 2024-08-07T18:08:36.2063514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 20%] 2024-08-07T18:08:36.2065920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 20%] 2024-08-07T18:08:36.2068315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 20%] 2024-08-07T18:08:36.2070738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 20%] 2024-08-07T18:08:36.2073190Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 20%] 2024-08-07T18:08:36.2075581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 21%] 2024-08-07T18:08:36.2077983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 21%] 2024-08-07T18:08:36.2080441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 21%] 2024-08-07T18:08:36.2082951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 21%] 2024-08-07T18:08:36.2085341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 21%] 2024-08-07T18:08:36.2087744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 21%] 2024-08-07T18:08:36.2090154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 21%] 2024-08-07T18:08:36.2092567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 21%] 2024-08-07T18:08:36.2094965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 21%] 2024-08-07T18:08:36.2097644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0084s] [ 21%] 2024-08-07T18:08:36.2100094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 21%] 2024-08-07T18:08:36.2102519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 21%] 2024-08-07T18:08:36.2104992Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 21%] 2024-08-07T18:08:36.2107480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 21%] 2024-08-07T18:08:36.2109883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 21%] 2024-08-07T18:08:36.2112258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 21%] 2024-08-07T18:08:36.2114716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 21%] 2024-08-07T18:08:36.2117228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 21%] 2024-08-07T18:08:36.2119636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 21%] 2024-08-07T18:08:36.2122083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 21%] 2024-08-07T18:08:36.2124472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 21%] 2024-08-07T18:08:36.2126897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0207s] [ 21%] 2024-08-07T18:08:36.2129304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0225s] [ 21%] 2024-08-07T18:08:36.2131717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0187s] [ 21%] 2024-08-07T18:08:36.2134132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0185s] [ 21%] 2024-08-07T18:08:36.2136565Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0305s] [ 21%] 2024-08-07T18:08:36.2138958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0314s] [ 21%] 2024-08-07T18:08:36.2141422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0214s] [ 21%] 2024-08-07T18:08:36.2143892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0214s] [ 21%] 2024-08-07T18:08:36.2146303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0218s] [ 21%] 2024-08-07T18:08:36.2148708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0238s] [ 21%] 2024-08-07T18:08:36.2151147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0210s] [ 21%] 2024-08-07T18:08:36.2153641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0207s] [ 21%] 2024-08-07T18:08:36.2156040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0312s] [ 21%] 2024-08-07T18:08:36.2158460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0322s] [ 21%] 2024-08-07T18:08:36.2160880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0240s] [ 21%] 2024-08-07T18:08:36.2163293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0242s] [ 21%] 2024-08-07T18:08:36.2165680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0250s] [ 21%] 2024-08-07T18:08:36.2176456Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0268s] [ 21%] 2024-08-07T18:08:36.2178965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0265s] [ 21%] 2024-08-07T18:08:36.2181413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0273s] [ 21%] 2024-08-07T18:08:36.2183925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0357s] [ 21%] 2024-08-07T18:08:36.2186450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0368s] [ 21%] 2024-08-07T18:08:36.2188880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0310s] [ 21%] 2024-08-07T18:08:36.2191299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0307s] [ 21%] 2024-08-07T18:08:36.2193776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0208s] [ 21%] 2024-08-07T18:08:36.2196541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0220s] [ 21%] 2024-08-07T18:08:36.2198953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0194s] [ 21%] 2024-08-07T18:08:36.2201399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0193s] [ 21%] 2024-08-07T18:08:36.2203818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0310s] [ 21%] 2024-08-07T18:08:36.2206247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0314s] [ 21%] 2024-08-07T18:08:36.2208658Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0225s] [ 21%] 2024-08-07T18:08:36.2211065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0225s] [ 21%] 2024-08-07T18:08:36.2213485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 21%] 2024-08-07T18:08:36.2215884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 21%] 2024-08-07T18:08:36.2218283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 21%] 2024-08-07T18:08:36.2220837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 21%] 2024-08-07T18:08:36.2223316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 21%] 2024-08-07T18:08:36.2225728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 21%] 2024-08-07T18:08:36.2228137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 21%] 2024-08-07T18:08:36.2230610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 21%] 2024-08-07T18:08:36.2233141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 21%] 2024-08-07T18:08:36.2235523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 21%] 2024-08-07T18:08:36.2237947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 21%] 2024-08-07T18:08:36.2240357Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 21%] 2024-08-07T18:08:36.2242757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0084s] [ 21%] 2024-08-07T18:08:36.2245161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 21%] 2024-08-07T18:08:36.2247565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 21%] 2024-08-07T18:08:36.2248873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 21%] 2024-08-07T18:08:36.2250114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 21%] 2024-08-07T18:08:36.2251447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 21%] 2024-08-07T18:08:36.2252793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 21%] 2024-08-07T18:08:36.2254081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 21%] 2024-08-07T18:08:36.2255348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 21%] 2024-08-07T18:08:36.2256703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 21%] 2024-08-07T18:08:36.2258032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 21%] 2024-08-07T18:08:36.2259318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 21%] 2024-08-07T18:08:36.2260577Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 22%] 2024-08-07T18:08:36.2261868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 22%] 2024-08-07T18:08:36.2263148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 22%] 2024-08-07T18:08:36.2264416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 22%] 2024-08-07T18:08:36.2265715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0079s] [ 22%] 2024-08-07T18:08:36.2266989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 22%] 2024-08-07T18:08:36.2268269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 22%] 2024-08-07T18:08:36.2269539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 22%] 2024-08-07T18:08:36.2270865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 22%] 2024-08-07T18:08:36.2272196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 22%] 2024-08-07T18:08:36.2273473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 22%] 2024-08-07T18:08:36.2274741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 22%] 2024-08-07T18:08:36.2276051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 22%] 2024-08-07T18:08:36.2277391Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 22%] 2024-08-07T18:08:36.2278650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 22%] 2024-08-07T18:08:36.2279943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 22%] 2024-08-07T18:08:36.2281213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 22%] 2024-08-07T18:08:36.2282523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 22%] 2024-08-07T18:08:36.2283782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 22%] 2024-08-07T18:08:36.2285067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 22%] 2024-08-07T18:08:36.2286342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 22%] 2024-08-07T18:08:36.2287631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 22%] 2024-08-07T18:08:36.2288944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 22%] 2024-08-07T18:08:36.2290287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 22%] 2024-08-07T18:08:36.2291566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 22%] 2024-08-07T18:08:36.2292852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 22%] 2024-08-07T18:08:36.2294191Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 22%] 2024-08-07T18:08:36.2295759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 22%] 2024-08-07T18:08:36.2297048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0086s] [ 22%] 2024-08-07T18:08:36.2298336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 22%] 2024-08-07T18:08:36.2299617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 22%] 2024-08-07T18:08:36.2300898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 22%] 2024-08-07T18:08:36.2302158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 22%] 2024-08-07T18:08:36.2303451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 22%] 2024-08-07T18:08:36.2304724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 22%] 2024-08-07T18:08:36.2306005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 22%] 2024-08-07T18:08:36.2307266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 22%] 2024-08-07T18:08:36.2308641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 22%] 2024-08-07T18:08:36.2309981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 22%] 2024-08-07T18:08:36.2311264Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 22%] 2024-08-07T18:08:36.2312549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0098s] [ 22%] 2024-08-07T18:08:36.2313906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 22%] 2024-08-07T18:08:36.2315240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0104s] [ 22%] 2024-08-07T18:08:36.2316513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 22%] 2024-08-07T18:08:36.2317808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0126s] [ 22%] 2024-08-07T18:08:36.2319099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0127s] [ 22%] 2024-08-07T18:08:36.2320392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0114s] [ 22%] 2024-08-07T18:08:36.2321713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 22%] 2024-08-07T18:08:36.2323042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0107s] [ 22%] 2024-08-07T18:08:36.2324327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 22%] 2024-08-07T18:08:36.2325612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0113s] [ 22%] 2024-08-07T18:08:36.2326936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 22%] 2024-08-07T18:08:36.2328280Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0133s] [ 22%] 2024-08-07T18:08:36.2329575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0135s] [ 22%] 2024-08-07T18:08:36.2330844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0125s] [ 22%] 2024-08-07T18:08:36.2332187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0124s] [ 22%] 2024-08-07T18:08:36.2333531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 22%] 2024-08-07T18:08:36.2334822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 22%] 2024-08-07T18:08:36.2336092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0131s] [ 22%] 2024-08-07T18:08:36.2337389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0134s] [ 22%] 2024-08-07T18:08:36.2338669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0148s] [ 22%] 2024-08-07T18:08:36.2339964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0149s] [ 22%] 2024-08-07T18:08:36.2341244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0138s] [ 22%] 2024-08-07T18:08:36.2342546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0147s] [ 22%] 2024-08-07T18:08:36.2343835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0093s] [ 22%] 2024-08-07T18:08:36.2345107Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 22%] 2024-08-07T18:08:36.2346434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0099s] [ 22%] 2024-08-07T18:08:36.2347763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0100s] [ 22%] 2024-08-07T18:08:36.2349051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0124s] [ 22%] 2024-08-07T18:08:36.2350372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0125s] [ 22%] 2024-08-07T18:08:36.2351714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0107s] [ 22%] 2024-08-07T18:08:36.2353069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0107s] [ 22%] 2024-08-07T18:08:36.2354344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0151s] [ 22%] 2024-08-07T18:08:36.2355649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0159s] [ 22%] 2024-08-07T18:08:36.2356931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0160s] [ 22%] 2024-08-07T18:08:36.2358223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0155s] [ 22%] 2024-08-07T18:08:36.2359499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0220s] [ 22%] 2024-08-07T18:08:36.2360802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0225s] [ 22%] 2024-08-07T18:08:36.2362080Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0178s] [ 23%] 2024-08-07T18:08:36.2363392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0175s] [ 23%] 2024-08-07T18:08:36.2364704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0163s] [ 23%] 2024-08-07T18:08:36.2366065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0176s] [ 23%] 2024-08-07T18:08:36.2367328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0178s] [ 23%] 2024-08-07T18:08:36.2368600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0177s] [ 23%] 2024-08-07T18:08:36.2369935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0221s] [ 23%] 2024-08-07T18:08:36.2371268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0226s] [ 23%] 2024-08-07T18:08:36.2372567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0192s] [ 23%] 2024-08-07T18:08:36.2373850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0193s] [ 23%] 2024-08-07T18:08:36.2375145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0182s] [ 23%] 2024-08-07T18:08:36.2376420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0208s] [ 23%] 2024-08-07T18:08:36.2377702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0208s] [ 23%] 2024-08-07T18:08:36.2378981Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0207s] [ 23%] 2024-08-07T18:08:36.2380284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0250s] [ 23%] 2024-08-07T18:08:36.2381565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0248s] [ 23%] 2024-08-07T18:08:36.2382900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0229s] [ 23%] 2024-08-07T18:08:36.2384246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0227s] [ 23%] 2024-08-07T18:08:36.2385521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0138s] [ 23%] 2024-08-07T18:08:36.2386810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0150s] [ 23%] 2024-08-07T18:08:36.2388120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0147s] [ 23%] 2024-08-07T18:08:36.2389485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0147s] [ 23%] 2024-08-07T18:08:36.2390740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0204s] [ 23%] 2024-08-07T18:08:36.2392029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0206s] [ 23%] 2024-08-07T18:08:36.2393319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0165s] [ 23%] 2024-08-07T18:08:36.2394606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0165s] [ 23%] 2024-08-07T18:08:36.2396146Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 23%] 2024-08-07T18:08:36.2397437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 23%] 2024-08-07T18:08:36.2398732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 23%] 2024-08-07T18:08:36.2400003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 23%] 2024-08-07T18:08:36.2401290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 23%] 2024-08-07T18:08:36.2402661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 23%] 2024-08-07T18:08:36.2404027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 23%] 2024-08-07T18:08:36.2405301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 23%] 2024-08-07T18:08:36.2406582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 23%] 2024-08-07T18:08:36.2407917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 23%] 2024-08-07T18:08:36.2409252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 23%] 2024-08-07T18:08:36.2410539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 23%] 2024-08-07T18:08:36.2411812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0085s] [ 23%] 2024-08-07T18:08:36.2413135Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 23%] 2024-08-07T18:08:36.2414403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0079s] [ 23%] 2024-08-07T18:08:36.2415690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 23%] 2024-08-07T18:08:36.2416958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 23%] 2024-08-07T18:08:36.2418257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 23%] 2024-08-07T18:08:36.2419525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0084s] [ 23%] 2024-08-07T18:08:36.2420875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 23%] 2024-08-07T18:08:36.2422227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 23%] 2024-08-07T18:08:36.2423521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 23%] 2024-08-07T18:08:36.2424809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 23%] 2024-08-07T18:08:36.2426129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 23%] 2024-08-07T18:08:36.2427468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 23%] 2024-08-07T18:08:36.2428734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 23%] 2024-08-07T18:08:36.2430012Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 23%] 2024-08-07T18:08:36.2431290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 23%] 2024-08-07T18:08:36.2432576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 23%] 2024-08-07T18:08:36.2433854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 23%] 2024-08-07T18:08:36.2435140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 23%] 2024-08-07T18:08:36.2436423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 23%] 2024-08-07T18:08:36.2437682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 23%] 2024-08-07T18:08:36.2438963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 23%] 2024-08-07T18:08:36.2440271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 23%] 2024-08-07T18:08:36.2441655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 23%] 2024-08-07T18:08:36.2442938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 23%] 2024-08-07T18:08:36.2444226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 23%] 2024-08-07T18:08:36.2445533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 23%] 2024-08-07T18:08:36.2446851Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 23%] 2024-08-07T18:08:36.2448127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0073s] [ 23%] 2024-08-07T18:08:36.2449392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 23%] 2024-08-07T18:08:36.2450684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 23%] 2024-08-07T18:08:36.2451950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 23%] 2024-08-07T18:08:36.2453252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 23%] 2024-08-07T18:08:36.2454531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 23%] 2024-08-07T18:08:36.2455822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 23%] 2024-08-07T18:08:36.2457088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 23%] 2024-08-07T18:08:36.2458415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0082s] [ 23%] 2024-08-07T18:08:36.2459732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 23%] 2024-08-07T18:08:36.2460986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0082s] [ 23%] 2024-08-07T18:08:36.2462266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 24%] 2024-08-07T18:08:36.2463587Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0093s] [ 24%] 2024-08-07T18:08:36.2464946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0093s] [ 24%] 2024-08-07T18:08:36.2466208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 24%] 2024-08-07T18:08:36.2467499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 24%] 2024-08-07T18:08:36.2468763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 24%] 2024-08-07T18:08:36.2470071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 24%] 2024-08-07T18:08:36.2471310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 24%] 2024-08-07T18:08:36.2472578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 24%] 2024-08-07T18:08:36.2473877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 24%] 2024-08-07T18:08:36.2475138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 24%] 2024-08-07T18:08:36.2476413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 24%] 2024-08-07T18:08:36.2477720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 24%] 2024-08-07T18:08:36.2479059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0266s] [ 24%] 2024-08-07T18:08:36.2480335Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0284s] [ 24%] 2024-08-07T18:08:36.2481618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0272s] [ 24%] 2024-08-07T18:08:36.2482961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0257s] [ 24%] 2024-08-07T18:08:36.2484312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0383s] [ 24%] 2024-08-07T18:08:36.2485583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0396s] [ 24%] 2024-08-07T18:08:36.2486861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0297s] [ 24%] 2024-08-07T18:08:36.2488162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0292s] [ 24%] 2024-08-07T18:08:36.2489427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0291s] [ 24%] 2024-08-07T18:08:36.2490714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0314s] [ 24%] 2024-08-07T18:08:36.2491984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0309s] [ 24%] 2024-08-07T18:08:36.2493306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0307s] [ 24%] 2024-08-07T18:08:36.2494594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0400s] [ 24%] 2024-08-07T18:08:36.2496160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0416s] [ 24%] 2024-08-07T18:08:36.2497523Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0340s] [ 24%] 2024-08-07T18:08:36.2498801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0327s] [ 24%] 2024-08-07T18:08:36.2500088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0333s] [ 24%] 2024-08-07T18:08:36.2501425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0376s] [ 24%] 2024-08-07T18:08:36.2502782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0370s] [ 24%] 2024-08-07T18:08:36.2504059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0368s] [ 24%] 2024-08-07T18:08:36.2505354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0448s] [ 24%] 2024-08-07T18:08:36.2506645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0456s] [ 24%] 2024-08-07T18:08:36.2507935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0393s] [ 24%] 2024-08-07T18:08:36.2509213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0400s] [ 24%] 2024-08-07T18:08:36.2510502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0254s] [ 24%] 2024-08-07T18:08:36.2511786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0269s] [ 24%] 2024-08-07T18:08:36.2513078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0256s] [ 24%] 2024-08-07T18:08:36.2514376Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0264s] [ 24%] 2024-08-07T18:08:36.2515695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0376s] [ 24%] 2024-08-07T18:08:36.2517047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0390s] [ 24%] 2024-08-07T18:08:36.2518322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0280s] [ 24%] 2024-08-07T18:08:36.2519667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0280s] [ 24%] 2024-08-07T18:08:36.2521031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 24%] 2024-08-07T18:08:36.2522338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 24%] 2024-08-07T18:08:36.2523624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 24%] 2024-08-07T18:08:36.2524904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 24%] 2024-08-07T18:08:36.2526207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0093s] [ 24%] 2024-08-07T18:08:36.2527488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 24%] 2024-08-07T18:08:36.2528774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 24%] 2024-08-07T18:08:36.2530062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 24%] 2024-08-07T18:08:36.2531356Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0081s] [ 24%] 2024-08-07T18:08:36.2532630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 24%] 2024-08-07T18:08:36.2533974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0084s] [ 24%] 2024-08-07T18:08:36.2535296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 24%] 2024-08-07T18:08:36.2536590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0096s] [ 24%] 2024-08-07T18:08:36.2537864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 24%] 2024-08-07T18:08:36.2539192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0095s] [ 24%] 2024-08-07T18:08:36.2540542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 24%] 2024-08-07T18:08:36.2541805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0090s] [ 24%] 2024-08-07T18:08:36.2543119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 24%] 2024-08-07T18:08:36.2544401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0098s] [ 24%] 2024-08-07T18:08:36.2545692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 24%] 2024-08-07T18:08:36.2546965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0102s] [ 24%] 2024-08-07T18:08:36.2548261Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 24%] 2024-08-07T18:08:36.2549539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0106s] [ 24%] 2024-08-07T18:08:36.2550818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 24%] 2024-08-07T18:08:36.2552142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 24%] 2024-08-07T18:08:36.2553483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 24%] 2024-08-07T18:08:36.2554773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 24%] 2024-08-07T18:08:36.2556036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 24%] 2024-08-07T18:08:36.2557366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 24%] 2024-08-07T18:08:36.2558687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 24%] 2024-08-07T18:08:36.2559969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 24%] 2024-08-07T18:08:36.2561242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 24%] 2024-08-07T18:08:36.2562531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 25%] 2024-08-07T18:08:36.2563820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 25%] 2024-08-07T18:08:36.2565085Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 25%] 2024-08-07T18:08:36.2566371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 25%] 2024-08-07T18:08:36.2567641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0089s] [ 25%] 2024-08-07T18:08:36.2568942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 25%] 2024-08-07T18:08:36.2570202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 25%] 2024-08-07T18:08:36.2571534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 25%] 2024-08-07T18:08:36.2572857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 25%] 2024-08-07T18:08:36.2574153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 25%] 2024-08-07T18:08:36.2575404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0076s] [ 25%] 2024-08-07T18:08:36.2576713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 25%] 2024-08-07T18:08:36.2578048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0100s] [ 25%] 2024-08-07T18:08:36.2579322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 25%] 2024-08-07T18:08:36.2580610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 25%] 2024-08-07T18:08:36.2581894Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 25%] 2024-08-07T18:08:36.2583247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 25%] 2024-08-07T18:08:36.2584514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 25%] 2024-08-07T18:08:36.2585802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 25%] 2024-08-07T18:08:36.2587078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 25%] 2024-08-07T18:08:36.2588365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 25%] 2024-08-07T18:08:36.2589680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 25%] 2024-08-07T18:08:36.2591001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 25%] 2024-08-07T18:08:36.2592299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 25%] 2024-08-07T18:08:36.2593581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 25%] 2024-08-07T18:08:36.2594912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 25%] 2024-08-07T18:08:36.2596526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 25%] 2024-08-07T18:08:36.2597808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 25%] 2024-08-07T18:08:36.2599069Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 25%] 2024-08-07T18:08:36.2600361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 25%] 2024-08-07T18:08:36.2601631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 25%] 2024-08-07T18:08:36.2602901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_152_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 25%] 2024-08-07T18:08:36.2604201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0127s] [ 25%] 2024-08-07T18:08:36.2605485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0134s] [ 25%] 2024-08-07T18:08:36.2606775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0129s] [ 25%] 2024-08-07T18:08:36.2608049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0128s] [ 25%] 2024-08-07T18:08:36.2609417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0179s] [ 25%] 2024-08-07T18:08:36.2610793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0182s] [ 25%] 2024-08-07T18:08:36.2612083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0145s] [ 25%] 2024-08-07T18:08:36.2613370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0144s] [ 25%] 2024-08-07T18:08:36.2614720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0134s] [ 25%] 2024-08-07T18:08:36.2616065Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0140s] [ 25%] 2024-08-07T18:08:36.2617330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0142s] [ 25%] 2024-08-07T18:08:36.2618627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0142s] [ 25%] 2024-08-07T18:08:36.2619912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0188s] [ 25%] 2024-08-07T18:08:36.2621251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0187s] [ 25%] 2024-08-07T18:08:36.2622528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0157s] [ 25%] 2024-08-07T18:08:36.2623854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0164s] [ 25%] 2024-08-07T18:08:36.2625132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0150s] [ 25%] 2024-08-07T18:08:36.2626427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0162s] [ 25%] 2024-08-07T18:08:36.2627750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0171s] [ 25%] 2024-08-07T18:08:36.2629072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0166s] [ 25%] 2024-08-07T18:08:36.2630379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0204s] [ 25%] 2024-08-07T18:08:36.2631657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0205s] [ 25%] 2024-08-07T18:08:36.2632996Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0182s] [ 25%] 2024-08-07T18:08:36.2634340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0184s] [ 25%] 2024-08-07T18:08:36.2635631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0121s] [ 25%] 2024-08-07T18:08:36.2636903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0125s] [ 25%] 2024-08-07T18:08:36.2638193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0122s] [ 25%] 2024-08-07T18:08:36.2639477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0123s] [ 25%] 2024-08-07T18:08:36.2640765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0172s] [ 25%] 2024-08-07T18:08:36.2642048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0175s] [ 25%] 2024-08-07T18:08:36.2643341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0138s] [ 25%] 2024-08-07T18:08:36.2644638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0138s] [ 25%] 2024-08-07T18:08:36.2645907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0206s] [ 25%] 2024-08-07T18:08:36.2647246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0218s] [ 25%] 2024-08-07T18:08:36.2648569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0207s] [ 25%] 2024-08-07T18:08:36.2649862Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0205s] [ 25%] 2024-08-07T18:08:36.2651166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0311s] [ 25%] 2024-08-07T18:08:36.2652512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0314s] [ 25%] 2024-08-07T18:08:36.2653909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0242s] [ 25%] 2024-08-07T18:08:36.2655208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0240s] [ 25%] 2024-08-07T18:08:36.2656482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0217s] [ 25%] 2024-08-07T18:08:36.2657765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0232s] [ 25%] 2024-08-07T18:08:36.2659051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0228s] [ 25%] 2024-08-07T18:08:36.2660324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0232s] [ 25%] 2024-08-07T18:08:36.2661617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0316s] [ 25%] 2024-08-07T18:08:36.2662900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0326s] [ 25%] 2024-08-07T18:08:36.2664206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0267s] [ 26%] 2024-08-07T18:08:36.2665530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0264s] [ 26%] 2024-08-07T18:08:36.2666878Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0244s] [ 26%] 2024-08-07T18:08:36.2668152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0262s] [ 26%] 2024-08-07T18:08:36.2669418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0273s] [ 26%] 2024-08-07T18:08:36.2670759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0268s] [ 26%] 2024-08-07T18:08:36.2672090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0343s] [ 26%] 2024-08-07T18:08:36.2673396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0352s] [ 26%] 2024-08-07T18:08:36.2674672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0304s] [ 26%] 2024-08-07T18:08:36.2675973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0311s] [ 26%] 2024-08-07T18:08:36.2677244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0194s] [ 26%] 2024-08-07T18:08:36.2678529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0203s] [ 26%] 2024-08-07T18:08:36.2679796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0195s] [ 26%] 2024-08-07T18:08:36.2681107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0192s] [ 26%] 2024-08-07T18:08:36.2682365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0306s] [ 26%] 2024-08-07T18:08:36.2683662Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0316s] [ 26%] 2024-08-07T18:08:36.2685012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0227s] [ 26%] 2024-08-07T18:08:36.2686355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0226s] [ 26%] 2024-08-07T18:08:36.2687633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0089s] [ 26%] 2024-08-07T18:08:36.2688942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 26%] 2024-08-07T18:08:36.2690279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0086s] [ 26%] 2024-08-07T18:08:36.2691542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 26%] 2024-08-07T18:08:36.2692823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0111s] [ 26%] 2024-08-07T18:08:36.2694117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 26%] 2024-08-07T18:08:36.2695634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0092s] [ 26%] 2024-08-07T18:08:36.2696942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 26%] 2024-08-07T18:08:36.2698206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0092s] [ 26%] 2024-08-07T18:08:36.2699502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 26%] 2024-08-07T18:08:36.2700772Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0095s] [ 26%] 2024-08-07T18:08:36.2702062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 26%] 2024-08-07T18:08:36.2703423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0115s] [ 26%] 2024-08-07T18:08:36.2704792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0118s] [ 26%] 2024-08-07T18:08:36.2706055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0102s] [ 26%] 2024-08-07T18:08:36.2707348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0100s] [ 26%] 2024-08-07T18:08:36.2708694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0104s] [ 26%] 2024-08-07T18:08:36.2710043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 26%] 2024-08-07T18:08:36.2711324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0110s] [ 26%] 2024-08-07T18:08:36.2712596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 26%] 2024-08-07T18:08:36.2713914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0127s] [ 26%] 2024-08-07T18:08:36.2715194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0127s] [ 26%] 2024-08-07T18:08:36.2716479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0116s] [ 26%] 2024-08-07T18:08:36.2717756Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0116s] [ 26%] 2024-08-07T18:08:36.2719052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 26%] 2024-08-07T18:08:36.2720316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 26%] 2024-08-07T18:08:36.2721624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 26%] 2024-08-07T18:08:36.2722970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 26%] 2024-08-07T18:08:36.2724299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 26%] 2024-08-07T18:08:36.2725580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0110s] [ 26%] 2024-08-07T18:08:36.2726888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 26%] 2024-08-07T18:08:36.2728227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 26%] 2024-08-07T18:08:36.2729488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0081s] [ 26%] 2024-08-07T18:08:36.2730771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 26%] 2024-08-07T18:08:36.2732034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 26%] 2024-08-07T18:08:36.2733327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 26%] 2024-08-07T18:08:36.2734604Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0104s] [ 26%] 2024-08-07T18:08:36.2735873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 26%] 2024-08-07T18:08:36.2737161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 26%] 2024-08-07T18:08:36.2738437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 26%] 2024-08-07T18:08:36.2739711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 26%] 2024-08-07T18:08:36.2741062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 26%] 2024-08-07T18:08:36.2742401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0085s] [ 26%] 2024-08-07T18:08:36.2743675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 26%] 2024-08-07T18:08:36.2744958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0105s] [ 26%] 2024-08-07T18:08:36.2746280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 26%] 2024-08-07T18:08:36.2747606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 26%] 2024-08-07T18:08:36.2748894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 26%] 2024-08-07T18:08:36.2750163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0097s] [ 26%] 2024-08-07T18:08:36.2751454Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 26%] 2024-08-07T18:08:36.2752723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0095s] [ 26%] 2024-08-07T18:08:36.2754032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 26%] 2024-08-07T18:08:36.2755302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0114s] [ 26%] 2024-08-07T18:08:36.2756601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 26%] 2024-08-07T18:08:36.2757864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0100s] [ 26%] 2024-08-07T18:08:36.2759201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0097s] [ 26%] 2024-08-07T18:08:36.2760510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0074s] [ 26%] 2024-08-07T18:08:36.2761777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 26%] 2024-08-07T18:08:36.2763052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 26%] 2024-08-07T18:08:36.2764379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 27%] 2024-08-07T18:08:36.2765724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0097s] [ 27%] 2024-08-07T18:08:36.2766981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 27%] 2024-08-07T18:08:36.2768263Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 27%] 2024-08-07T18:08:36.2769532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 27%] 2024-08-07T18:08:36.2770842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0373s] [ 27%] 2024-08-07T18:08:36.2772109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0405s] [ 27%] 2024-08-07T18:08:36.2773381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0360s] [ 27%] 2024-08-07T18:08:36.2774686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0370s] [ 27%] 2024-08-07T18:08:36.2775965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0576s] [ 27%] 2024-08-07T18:08:36.2777268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0593s] [ 27%] 2024-08-07T18:08:36.2778585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0434s] [ 27%] 2024-08-07T18:08:36.2779930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0436s] [ 27%] 2024-08-07T18:08:36.2781200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0392s] [ 27%] 2024-08-07T18:08:36.2782491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0428s] [ 27%] 2024-08-07T18:08:36.2783823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0412s] [ 27%] 2024-08-07T18:08:36.2785171Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0402s] [ 27%] 2024-08-07T18:08:36.2786439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0588s] [ 27%] 2024-08-07T18:08:36.2787728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0613s] [ 27%] 2024-08-07T18:08:36.2789029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0476s] [ 27%] 2024-08-07T18:08:36.2790304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0477s] [ 27%] 2024-08-07T18:08:36.2791590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0450s] [ 27%] 2024-08-07T18:08:36.2792877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0482s] [ 27%] 2024-08-07T18:08:36.2794187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0478s] [ 27%] 2024-08-07T18:08:36.2795703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.4410s] [ 27%] 2024-08-07T18:08:36.2797082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0647s] [ 27%] 2024-08-07T18:08:36.2798432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0662s] [ 27%] 2024-08-07T18:08:36.2799704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0570s] [ 27%] 2024-08-07T18:08:36.2801002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0573s] [ 27%] 2024-08-07T18:08:36.2802320Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0368s] [ 27%] 2024-08-07T18:08:36.2803678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0390s] [ 27%] 2024-08-07T18:08:36.2804948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0346s] [ 27%] 2024-08-07T18:08:36.2806237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0344s] [ 27%] 2024-08-07T18:08:36.2807513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0567s] [ 27%] 2024-08-07T18:08:36.2808818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0584s] [ 27%] 2024-08-07T18:08:36.2810083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0413s] [ 27%] 2024-08-07T18:08:36.2811378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0416s] [ 27%] 2024-08-07T18:08:36.2812650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0093s] [ 27%] 2024-08-07T18:08:36.2813942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 27%] 2024-08-07T18:08:36.2815245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0098s] [ 27%] 2024-08-07T18:08:36.2816564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 27%] 2024-08-07T18:08:36.2817904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0120s] [ 27%] 2024-08-07T18:08:36.2819175Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0121s] [ 27%] 2024-08-07T18:08:36.2820459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0110s] [ 27%] 2024-08-07T18:08:36.2821835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0107s] [ 27%] 2024-08-07T18:08:36.2823181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0099s] [ 27%] 2024-08-07T18:08:36.2824474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 27%] 2024-08-07T18:08:36.2825739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0106s] [ 27%] 2024-08-07T18:08:36.2827042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 27%] 2024-08-07T18:08:36.2828311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0133s] [ 27%] 2024-08-07T18:08:36.2829603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0129s] [ 27%] 2024-08-07T18:08:36.2830877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0117s] [ 27%] 2024-08-07T18:08:36.2832181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0113s] [ 27%] 2024-08-07T18:08:36.2833445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0110s] [ 27%] 2024-08-07T18:08:36.2834797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 27%] 2024-08-07T18:08:36.2836120Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0120s] [ 27%] 2024-08-07T18:08:36.2837413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0113s] [ 27%] 2024-08-07T18:08:36.2838685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0136s] [ 27%] 2024-08-07T18:08:36.2840006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 27%] 2024-08-07T18:08:36.2841350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0129s] [ 27%] 2024-08-07T18:08:36.2842629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0124s] [ 27%] 2024-08-07T18:08:36.2843920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0086s] [ 27%] 2024-08-07T18:08:36.2845197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 27%] 2024-08-07T18:08:36.2846482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0088s] [ 27%] 2024-08-07T18:08:36.2847762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 27%] 2024-08-07T18:08:36.2849031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0114s] [ 27%] 2024-08-07T18:08:36.2850310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 27%] 2024-08-07T18:08:36.2851585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0097s] [ 27%] 2024-08-07T18:08:36.2852862Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0096s] [ 27%] 2024-08-07T18:08:36.2854198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0082s] [ 27%] 2024-08-07T18:08:36.2855537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 27%] 2024-08-07T18:08:36.2856797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 27%] 2024-08-07T18:08:36.2858080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 27%] 2024-08-07T18:08:36.2859394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0106s] [ 27%] 2024-08-07T18:08:36.2860741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 27%] 2024-08-07T18:08:36.2862001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 27%] 2024-08-07T18:08:36.2863297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 27%] 2024-08-07T18:08:36.2864589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 27%] 2024-08-07T18:08:36.2865859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 28%] 2024-08-07T18:08:36.2867137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 28%] 2024-08-07T18:08:36.2868406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 28%] 2024-08-07T18:08:36.2869700Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 28%] 2024-08-07T18:08:36.2870969Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 28%] 2024-08-07T18:08:36.2872291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 28%] 2024-08-07T18:08:36.2873620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 28%] 2024-08-07T18:08:36.2874898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0095s] [ 28%] 2024-08-07T18:08:36.2876167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 28%] 2024-08-07T18:08:36.2877472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0095s] [ 28%] 2024-08-07T18:08:36.2878811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0096s] [ 28%] 2024-08-07T18:08:36.2880072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0116s] [ 28%] 2024-08-07T18:08:36.2881366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0116s] [ 28%] 2024-08-07T18:08:36.2882635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0098s] [ 28%] 2024-08-07T18:08:36.2883950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 28%] 2024-08-07T18:08:36.2885207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0075s] [ 28%] 2024-08-07T18:08:36.2886499Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 28%] 2024-08-07T18:08:36.2887772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 28%] 2024-08-07T18:08:36.2889044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 28%] 2024-08-07T18:08:36.2890306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 28%] 2024-08-07T18:08:36.2891620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 28%] 2024-08-07T18:08:36.2892982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 28%] 2024-08-07T18:08:36.2894270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 28%] 2024-08-07T18:08:36.2895803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 28%] 2024-08-07T18:08:36.2897180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 28%] 2024-08-07T18:08:36.2898534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 28%] 2024-08-07T18:08:36.2899795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 28%] 2024-08-07T18:08:36.2901079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 28%] 2024-08-07T18:08:36.2902368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 28%] 2024-08-07T18:08:36.2903625Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 28%] 2024-08-07T18:08:36.2904925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 28%] 2024-08-07T18:08:36.2906189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 28%] 2024-08-07T18:08:36.2907480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 28%] 2024-08-07T18:08:36.2908740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 28%] 2024-08-07T18:08:36.2910079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 28%] 2024-08-07T18:08:36.2911423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 28%] 2024-08-07T18:08:36.2912699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 28%] 2024-08-07T18:08:36.2913976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 28%] 2024-08-07T18:08:36.2915312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 28%] 2024-08-07T18:08:36.2916628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 28%] 2024-08-07T18:08:36.2917896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 28%] 2024-08-07T18:08:36.2919171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 28%] 2024-08-07T18:08:36.2920440Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 28%]
2024-08-07T18:08:36.2921788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 28%]
2024-08-07T18:08:36.2923051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 28%]
2024-08-07T18:08:36.2924373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 28%]
2024-08-07T18:08:36.2925647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 28%]
2024-08-07T18:08:36.2926919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 28%]
2024-08-07T18:08:36.2928177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 28%]
2024-08-07T18:08:36.2929478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 28%]
2024-08-07T18:08:36.2930811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 28%]
2024-08-07T18:08:36.2932069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 28%]
2024-08-07T18:08:36.2933348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 28%]
2024-08-07T18:08:36.2934679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 28%]
2024-08-07T18:08:36.2936060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 28%]
2024-08-07T18:08:36.2937313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 28%]
2024-08-07T18:08:36.2938593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 28%]
2024-08-07T18:08:36.2939865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 28%]
2024-08-07T18:08:36.2941148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 28%]
2024-08-07T18:08:36.2942407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 28%]
2024-08-07T18:08:36.2943677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 28%]
2024-08-07T18:08:36.2944994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 28%]
2024-08-07T18:08:36.2946260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 28%]
2024-08-07T18:08:36.2947579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 28%]
2024-08-07T18:08:36.2948891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 28%]
2024-08-07T18:08:36.2950176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 28%]
2024-08-07T18:08:36.2951432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 28%]
2024-08-07T18:08:36.2952805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 28%]
2024-08-07T18:08:36.2954151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 28%]
2024-08-07T18:08:36.2955415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 28%]
2024-08-07T18:08:36.2956707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 28%]
2024-08-07T18:08:36.2957961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0093s] [ 28%]
2024-08-07T18:08:36.2959253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 28%]
2024-08-07T18:08:36.2960510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 28%]
2024-08-07T18:08:36.2961793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 28%]
2024-08-07T18:08:36.2963067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 28%]
2024-08-07T18:08:36.2964372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 28%]
2024-08-07T18:08:36.2965631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 29%]
2024-08-07T18:08:36.2966971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 29%]
2024-08-07T18:08:36.2968284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 29%]
2024-08-07T18:08:36.2969545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 29%]
2024-08-07T18:08:36.2970818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 29%]
2024-08-07T18:08:36.2972124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 29%]
2024-08-07T18:08:36.2973459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 29%]
2024-08-07T18:08:36.2974742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 29%]
2024-08-07T18:08:36.2976024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 29%]
2024-08-07T18:08:36.2977299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 29%]
2024-08-07T18:08:36.2978569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 29%]
2024-08-07T18:08:36.2979830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 29%]
2024-08-07T18:08:36.2981088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 29%]
2024-08-07T18:08:36.2982392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 29%]
2024-08-07T18:08:36.2983647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 29%]
2024-08-07T18:08:36.2984982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 29%]
2024-08-07T18:08:36.2986291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 29%]
2024-08-07T18:08:36.2987585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 29%]
2024-08-07T18:08:36.2988836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 29%]
2024-08-07T18:08:36.2990157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 29%]
2024-08-07T18:08:36.2991473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 29%]
2024-08-07T18:08:36.2992732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 29%]
2024-08-07T18:08:36.2994020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 29%]
2024-08-07T18:08:36.2995546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 29%]
2024-08-07T18:08:36.2996854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 29%]
2024-08-07T18:08:36.2998117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 29%]
2024-08-07T18:08:36.2999389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 29%]
2024-08-07T18:08:36.3000654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 29%]
2024-08-07T18:08:36.3001931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 29%]
2024-08-07T18:08:36.3003184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 29%]
2024-08-07T18:08:36.3004555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 29%]
2024-08-07T18:08:36.3005898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 29%]
2024-08-07T18:08:36.3007153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 29%]
2024-08-07T18:08:36.3008433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 29%]
2024-08-07T18:08:36.3009740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 29%]
2024-08-07T18:08:36.3011083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 29%]
2024-08-07T18:08:36.3012329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 29%]
2024-08-07T18:08:36.3013610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 29%]
2024-08-07T18:08:36.3014900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 29%]
2024-08-07T18:08:36.3016174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 29%]
2024-08-07T18:08:36.3017428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 29%]
2024-08-07T18:08:36.3018689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 29%]
2024-08-07T18:08:36.3019967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 29%]
2024-08-07T18:08:36.3021267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 29%]
2024-08-07T18:08:36.3022583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 29%]
2024-08-07T18:08:36.3023888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 29%]
2024-08-07T18:08:36.3025181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 29%]
2024-08-07T18:08:36.3026436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 29%]
2024-08-07T18:08:36.3027749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 29%]
2024-08-07T18:08:36.3029115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 29%]
2024-08-07T18:08:36.3030381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 29%]
2024-08-07T18:08:36.3031635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 29%]
2024-08-07T18:08:36.3032895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 29%]
2024-08-07T18:08:36.3034177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 29%]
2024-08-07T18:08:36.3035433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 29%]
2024-08-07T18:08:36.3036712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 29%]
2024-08-07T18:08:36.3037971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 29%]
2024-08-07T18:08:36.3039252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 29%]
2024-08-07T18:08:36.3040499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 29%]
2024-08-07T18:08:36.3041817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 29%]
2024-08-07T18:08:36.3043122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 29%]
2024-08-07T18:08:36.3044389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 29%]
2024-08-07T18:08:36.3045663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 29%]
2024-08-07T18:08:36.3046964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 29%]
2024-08-07T18:08:36.3048293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 29%]
2024-08-07T18:08:36.3049549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 29%]
2024-08-07T18:08:36.3050813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 29%]
2024-08-07T18:08:36.3052071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 29%]
2024-08-07T18:08:36.3053338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 29%]
2024-08-07T18:08:36.3054648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 29%]
2024-08-07T18:08:36.3055907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 29%]
2024-08-07T18:08:36.3057209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 29%]
2024-08-07T18:08:36.3058444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 29%]
2024-08-07T18:08:36.3059721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 29%]
2024-08-07T18:08:36.3061025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0105s] [ 29%]
2024-08-07T18:08:36.3062366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0101s] [ 29%]
2024-08-07T18:08:36.3063624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 29%]
2024-08-07T18:08:36.3064929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 30%]
2024-08-07T18:08:36.3066243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0107s] [ 30%]
2024-08-07T18:08:36.3067580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 30%]
2024-08-07T18:08:36.3068843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 30%]
2024-08-07T18:08:36.3070114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 30%]
2024-08-07T18:08:36.3071404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 30%]
2024-08-07T18:08:36.3072670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 30%]
2024-08-07T18:08:36.3073946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 30%]
2024-08-07T18:08:36.3075235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 30%]
2024-08-07T18:08:36.3076522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0120s] [ 30%]
2024-08-07T18:08:36.3077791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0127s] [ 30%]
2024-08-07T18:08:36.3079113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 30%]
2024-08-07T18:08:36.3080433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 30%]
2024-08-07T18:08:36.3081693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0132s] [ 30%]
2024-08-07T18:08:36.3082978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0140s] [ 30%]
2024-08-07T18:08:36.3084287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0099s] [ 30%]
2024-08-07T18:08:36.3085631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0100s] [ 30%]
2024-08-07T18:08:36.3086909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0147s] [ 30%]
2024-08-07T18:08:36.3088201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0152s] [ 30%]
2024-08-07T18:08:36.3089470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0110s] [ 30%]
2024-08-07T18:08:36.3090764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0109s] [ 30%]
2024-08-07T18:08:36.3092019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0092s] [ 30%]
2024-08-07T18:08:36.3093304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 30%]
2024-08-07T18:08:36.3094595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 30%]
2024-08-07T18:08:36.3096100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 30%]
2024-08-07T18:08:36.3097394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 30%]
2024-08-07T18:08:36.3098733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 30%]
2024-08-07T18:08:36.3100083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 30%]
2024-08-07T18:08:36.3101352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 30%]
2024-08-07T18:08:36.3102630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 30%]
2024-08-07T18:08:36.3103961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 30%]
2024-08-07T18:08:36.3105327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 30%]
2024-08-07T18:08:36.3106584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 30%]
2024-08-07T18:08:36.3107850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 30%]
2024-08-07T18:08:36.3109145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 30%]
2024-08-07T18:08:36.3110400Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 30%]
2024-08-07T18:08:36.3111679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 30%]
2024-08-07T18:08:36.3112940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 30%]
2024-08-07T18:08:36.3114232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 30%]
2024-08-07T18:08:36.3115501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 30%]
2024-08-07T18:08:36.3116820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 30%]
2024-08-07T18:08:36.3118824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 30%]
2024-08-07T18:08:36.3120113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 30%]
2024-08-07T18:08:36.3121416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 30%]
2024-08-07T18:08:36.3122742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 30%]
2024-08-07T18:08:36.3124071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 30%]
2024-08-07T18:08:36.3125344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 30%]
2024-08-07T18:08:36.3126636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 30%]
2024-08-07T18:08:36.3127882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 30%]
2024-08-07T18:08:36.3129170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 30%]
2024-08-07T18:08:36.3130428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 30%]
2024-08-07T18:08:36.3131703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 30%]
2024-08-07T18:08:36.3132978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 30%]
2024-08-07T18:08:36.3134228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 30%]
2024-08-07T18:08:36.3135522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 30%]
2024-08-07T18:08:36.3136823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 30%]
2024-08-07T18:08:36.3138154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 30%]
2024-08-07T18:08:36.3139405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 30%]
2024-08-07T18:08:36.3140682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 30%]
2024-08-07T18:08:36.3141977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 30%]
2024-08-07T18:08:36.3143303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 30%]
2024-08-07T18:08:36.3144552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 30%]
2024-08-07T18:08:36.3145895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 30%]
2024-08-07T18:08:36.3147172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 30%]
2024-08-07T18:08:36.3148430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 30%]
2024-08-07T18:08:36.3149700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 30%]
2024-08-07T18:08:36.3150966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 30%]
2024-08-07T18:08:36.3152248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 30%]
2024-08-07T18:08:36.3153504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 30%]
2024-08-07T18:08:36.3154817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 30%]
2024-08-07T18:08:36.3156137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 30%]
2024-08-07T18:08:36.3157416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 30%]
2024-08-07T18:08:36.3158672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 30%]
2024-08-07T18:08:36.3159971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 30%]
2024-08-07T18:08:36.3161306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 30%]
2024-08-07T18:08:36.3162565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 30%]
2024-08-07T18:08:36.3163843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 30%]
2024-08-07T18:08:36.3165119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 30%]
2024-08-07T18:08:36.3166410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 31%]
2024-08-07T18:08:36.3167658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 31%]
2024-08-07T18:08:36.3168936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 31%]
2024-08-07T18:08:36.3170194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 31%]
2024-08-07T18:08:36.3171457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 31%]
2024-08-07T18:08:36.3172731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 31%]
2024-08-07T18:08:36.3174036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 31%]
2024-08-07T18:08:36.3175384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 31%]
2024-08-07T18:08:36.3176637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 31%]
2024-08-07T18:08:36.3177895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 31%]
2024-08-07T18:08:36.3179196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 31%]
2024-08-07T18:08:36.3180509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 31%]
2024-08-07T18:08:36.3181760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 31%]
2024-08-07T18:08:36.3183032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 31%]
2024-08-07T18:08:36.3184295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 31%]
2024-08-07T18:08:36.3185595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0205s] [ 31%]
2024-08-07T18:08:36.3186887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0217s] [ 31%]
2024-08-07T18:08:36.3188156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0211s] [ 31%]
2024-08-07T18:08:36.3189459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0210s] [ 31%]
2024-08-07T18:08:36.3190731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0309s] [ 31%]
2024-08-07T18:08:36.3192025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0314s] [ 31%]
2024-08-07T18:08:36.3193343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0251s] [ 31%]
2024-08-07T18:08:36.3194699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0250s] [ 31%]
2024-08-07T18:08:36.3196260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0213s] [ 31%]
2024-08-07T18:08:36.3197631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0229s] [ 31%]
2024-08-07T18:08:36.3198997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0229s] [ 31%]
2024-08-07T18:08:36.3200279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0230s] [ 31%]
2024-08-07T18:08:36.3201571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0323s] [ 31%]
2024-08-07T18:08:36.3202865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0329s] [ 31%]
2024-08-07T18:08:36.3204161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0278s] [ 31%]
2024-08-07T18:08:36.3205447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0276s] [ 31%]
2024-08-07T18:08:36.3206741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0243s] [ 31%]
2024-08-07T18:08:36.3208024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0262s] [ 31%]
2024-08-07T18:08:36.3209323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0272s] [ 31%]
2024-08-07T18:08:36.3210599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0280s] [ 31%]
2024-08-07T18:08:36.3211931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0356s] [ 31%]
2024-08-07T18:08:36.3213300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0354s] [ 31%]
2024-08-07T18:08:36.3214578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0317s] [ 31%]
2024-08-07T18:08:36.3215899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0317s] [ 31%]
2024-08-07T18:08:36.3217215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0197s] [ 31%]
2024-08-07T18:08:36.3218563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0204s] [ 31%]
2024-08-07T18:08:36.3219825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0193s] [ 31%]
2024-08-07T18:08:36.3221168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0192s] [ 31%]
2024-08-07T18:08:36.3222457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0302s] [ 31%]
2024-08-07T18:08:36.3223739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0306s] [ 31%]
2024-08-07T18:08:36.3225025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0234s] [ 31%]
2024-08-07T18:08:36.3226324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0239s] [ 31%]
2024-08-07T18:08:36.3227624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0359s] [ 31%]
2024-08-07T18:08:36.3228918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0383s] [ 31%]
2024-08-07T18:08:36.3230224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0359s] [ 31%]
2024-08-07T18:08:36.3231557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0363s] [ 31%]
2024-08-07T18:08:36.3232850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0583s] [ 31%]
2024-08-07T18:08:36.3234131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0594s] [ 31%]
2024-08-07T18:08:36.3235487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0442s] [ 31%]
2024-08-07T18:08:36.3236821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0445s] [ 31%]
2024-08-07T18:08:36.3238092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0382s] [ 31%]
2024-08-07T18:08:36.3239381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0411s] [ 31%]
2024-08-07T18:08:36.3240656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0404s] [ 31%]
2024-08-07T18:08:36.3241958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0402s] [ 31%]
2024-08-07T18:08:36.3243231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0596s] [ 31%]
2024-08-07T18:08:36.3244528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.4540s] [ 31%]
2024-08-07T18:08:36.3245825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0495s] [ 31%]
2024-08-07T18:08:36.3247122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0489s] [ 31%]
2024-08-07T18:08:36.3248384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0443s] [ 31%]
2024-08-07T18:08:36.3249710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0481s] [ 31%]
2024-08-07T18:08:36.3251055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0488s] [ 31%]
2024-08-07T18:08:36.3252328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0486s] [ 31%]
2024-08-07T18:08:36.3253670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0661s] [ 31%]
2024-08-07T18:08:36.3255010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0663s] [ 31%]
2024-08-07T18:08:36.3256372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0571s] [ 31%]
2024-08-07T18:08:36.3257647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0582s] [ 31%]
2024-08-07T18:08:36.3258936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0348s] [ 31%]
2024-08-07T18:08:36.3260219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0369s] [ 31%]
2024-08-07T18:08:36.3261506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0342s] [ 31%]
2024-08-07T18:08:36.3262777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0345s] [ 31%]
2024-08-07T18:08:36.3264050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0576s] [ 31%]
2024-08-07T18:08:36.3265368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0588s] [ 31%]
2024-08-07T18:08:36.3266643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0421s] [ 32%]
2024-08-07T18:08:36.3267984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0422s] [ 32%]
2024-08-07T18:08:36.3269305Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0129s] [ 32%]
2024-08-07T18:08:36.3270594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0123s] [ 32%]
2024-08-07T18:08:36.3271898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0124s] [ 32%]
2024-08-07T18:08:36.3273236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0126s] [ 32%]
2024-08-07T18:08:36.3274561Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0173s] [ 32%] 2024-08-07T18:08:36.3275851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0179s] [ 32%] 2024-08-07T18:08:36.3277136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0139s] [ 32%] 2024-08-07T18:08:36.3278419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0130s] [ 32%] 2024-08-07T18:08:36.3279706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0128s] [ 32%] 2024-08-07T18:08:36.3280973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0129s] [ 32%] 2024-08-07T18:08:36.3282262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0130s] [ 32%] 2024-08-07T18:08:36.3283545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0131s] [ 32%] 2024-08-07T18:08:36.3284836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0180s] [ 32%] 2024-08-07T18:08:36.3286128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0177s] [ 32%] 2024-08-07T18:08:36.3287461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0138s] [ 32%] 2024-08-07T18:08:36.3288794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0138s] [ 32%] 2024-08-07T18:08:36.3290057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0147s] [ 32%] 2024-08-07T18:08:36.3291359Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0149s] [ 32%] 2024-08-07T18:08:36.3292661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0156s] [ 32%] 2024-08-07T18:08:36.3294006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0155s] [ 32%] 2024-08-07T18:08:36.3295501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0197s] [ 32%] 2024-08-07T18:08:36.3296838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0200s] [ 32%] 2024-08-07T18:08:36.3298120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0165s] [ 32%] 2024-08-07T18:08:36.3299414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0166s] [ 32%] 2024-08-07T18:08:36.3300671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0109s] [ 32%] 2024-08-07T18:08:36.3301942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 32%] 2024-08-07T18:08:36.3303235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0103s] [ 32%] 2024-08-07T18:08:36.3304508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 32%] 2024-08-07T18:08:36.3305880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0159s] [ 32%] 2024-08-07T18:08:36.3307226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0160s] [ 32%] 2024-08-07T18:08:36.3308506Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0119s] [ 32%] 2024-08-07T18:08:36.3309771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0118s] [ 32%] 2024-08-07T18:08:36.3311117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0097s] [ 32%] 2024-08-07T18:08:36.3312472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 32%] 2024-08-07T18:08:36.3313752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0087s] [ 32%] 2024-08-07T18:08:36.3315025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 32%] 2024-08-07T18:08:36.3316322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0136s] [ 32%] 2024-08-07T18:08:36.3317625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0136s] [ 32%] 2024-08-07T18:08:36.3318888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0099s] [ 32%] 2024-08-07T18:08:36.3320185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0101s] [ 32%] 2024-08-07T18:08:36.3321498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0117s] [ 32%] 2024-08-07T18:08:36.3322789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 32%] 2024-08-07T18:08:36.3324046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0114s] [ 32%] 2024-08-07T18:08:36.3325376Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 32%] 2024-08-07T18:08:36.3326714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0151s] [ 32%] 2024-08-07T18:08:36.3327984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0146s] [ 32%] 2024-08-07T18:08:36.3329320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0115s] [ 32%] 2024-08-07T18:08:36.3330679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 32%] 2024-08-07T18:08:36.3332013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0138s] [ 32%] 2024-08-07T18:08:36.3333278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0141s] [ 32%] 2024-08-07T18:08:36.3334565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0139s] [ 32%] 2024-08-07T18:08:36.3335850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0139s] [ 32%] 2024-08-07T18:08:36.3337136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0172s] [ 32%] 2024-08-07T18:08:36.3338408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0175s] [ 32%] 2024-08-07T18:08:36.3339695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0145s] [ 32%] 2024-08-07T18:08:36.3340979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0143s] [ 32%] 2024-08-07T18:08:36.3342236Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0102s] [ 32%] 2024-08-07T18:08:36.3343565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 32%] 2024-08-07T18:08:36.3344878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0092s] [ 32%] 2024-08-07T18:08:36.3346183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 32%] 2024-08-07T18:08:36.3347443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0137s] [ 32%] 2024-08-07T18:08:36.3348774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0131s] [ 32%] 2024-08-07T18:08:36.3350084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0098s] [ 32%] 2024-08-07T18:08:36.3351379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 32%] 2024-08-07T18:08:36.3352647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0692s] [ 32%] 2024-08-07T18:08:36.3353934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0756s] [ 32%] 2024-08-07T18:08:36.3355226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0658s] [ 32%] 2024-08-07T18:08:36.3356522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0657s] [ 32%] 2024-08-07T18:08:36.3357816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.4913s] [ 32%] 2024-08-07T18:08:36.3359112Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1173s] [ 32%] 2024-08-07T18:08:36.3360390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0840s] [ 32%] 2024-08-07T18:08:36.3361665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0839s] [ 32%] 2024-08-07T18:08:36.3362997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0741s] [ 32%] 2024-08-07T18:08:36.3364324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0813s] [ 32%] 2024-08-07T18:08:36.3365625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0752s] [ 32%] 2024-08-07T18:08:36.3366942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0760s] [ 32%] 2024-08-07T18:08:36.3368269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1156s] [ 33%] 2024-08-07T18:08:36.3369564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1195s] [ 33%] 2024-08-07T18:08:36.3370839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0912s] [ 33%] 2024-08-07T18:08:36.3372142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0931s] [ 33%] 2024-08-07T18:08:36.3373424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0835s] [ 33%] 2024-08-07T18:08:36.3374723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0909s] [ 33%] 2024-08-07T18:08:36.3376003Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0892s] [ 33%] 2024-08-07T18:08:36.3377302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.4684s] [ 33%] 2024-08-07T18:08:36.3378578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1237s] [ 33%] 2024-08-07T18:08:36.3379854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1273s] [ 33%] 2024-08-07T18:08:36.3381185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.1055s] [ 33%] 2024-08-07T18:08:36.3382519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.1056s] [ 33%] 2024-08-07T18:08:36.3383798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0681s] [ 33%] 2024-08-07T18:08:36.3385068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0725s] [ 33%] 2024-08-07T18:08:36.3386440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0638s] [ 33%] 2024-08-07T18:08:36.3387763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0644s] [ 33%] 2024-08-07T18:08:36.3389054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1116s] [ 33%] 2024-08-07T18:08:36.3390332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1147s] [ 33%] 2024-08-07T18:08:36.3391624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0802s] [ 33%] 2024-08-07T18:08:36.3392901Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0803s] [ 33%] 2024-08-07T18:08:36.3394159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0140s] [ 33%] 2024-08-07T18:08:36.3395727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0139s] [ 33%] 2024-08-07T18:08:36.3397038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0139s] [ 33%] 2024-08-07T18:08:36.3398330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0140s] [ 33%] 2024-08-07T18:08:36.3399662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0192s] [ 33%] 2024-08-07T18:08:36.3401028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0192s] [ 33%] 2024-08-07T18:08:36.3402302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0157s] [ 33%] 2024-08-07T18:08:36.3403590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0158s] [ 33%] 2024-08-07T18:08:36.3404910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0145s] [ 33%] 2024-08-07T18:08:36.3406283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0148s] [ 33%] 2024-08-07T18:08:36.3407545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0156s] [ 33%] 2024-08-07T18:08:36.3408814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0156s] [ 33%] 2024-08-07T18:08:36.3410103Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0201s] [ 33%] 2024-08-07T18:08:36.3411391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.4066s] [ 33%] 2024-08-07T18:08:36.3412676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0176s] [ 33%] 2024-08-07T18:08:36.3413952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0178s] [ 33%] 2024-08-07T18:08:36.3415238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0170s] [ 33%] 2024-08-07T18:08:36.3416537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0177s] [ 33%] 2024-08-07T18:08:36.3417820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0188s] [ 33%] 2024-08-07T18:08:36.3419140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0187s] [ 33%] 2024-08-07T18:08:36.3420462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0220s] [ 33%] 2024-08-07T18:08:36.3421792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0221s] [ 33%] 2024-08-07T18:08:36.3423068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0210s] [ 33%] 2024-08-07T18:08:36.3424407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0202s] [ 33%] 2024-08-07T18:08:36.3425732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0128s] [ 33%] 2024-08-07T18:08:36.3427036Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 33%] 2024-08-07T18:08:36.3428304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0132s] [ 33%] 2024-08-07T18:08:36.3429605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0133s] [ 33%] 2024-08-07T18:08:36.3430867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0187s] [ 33%] 2024-08-07T18:08:36.3432157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0190s] [ 33%] 2024-08-07T18:08:36.3433428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0151s] [ 33%] 2024-08-07T18:08:36.3434713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0155s] [ 33%] 2024-08-07T18:08:36.3436007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0117s] [ 33%] 2024-08-07T18:08:36.3437318Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0116s] [ 33%] 2024-08-07T18:08:36.3438650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0109s] [ 33%] 2024-08-07T18:08:36.3439923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 33%] 2024-08-07T18:08:36.3441201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0160s] [ 33%] 2024-08-07T18:08:36.3442529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0159s] [ 33%] 2024-08-07T18:08:36.3443866Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0117s] [ 33%] 2024-08-07T18:08:36.3445141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0118s] [ 33%] 2024-08-07T18:08:36.3446426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0123s] [ 33%] 2024-08-07T18:08:36.3447718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0123s] [ 33%] 2024-08-07T18:08:36.3448985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0120s] [ 33%] 2024-08-07T18:08:36.3450271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0120s] [ 33%] 2024-08-07T18:08:36.3451542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0167s] [ 33%] 2024-08-07T18:08:36.3452828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0158s] [ 33%] 2024-08-07T18:08:36.3454094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0119s] [ 33%] 2024-08-07T18:08:36.3455377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0119s] [ 33%] 2024-08-07T18:08:36.3456705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0146s] [ 33%] 2024-08-07T18:08:36.3458045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0146s] [ 33%] 2024-08-07T18:08:36.3459326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0142s] [ 33%] 2024-08-07T18:08:36.3460597Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0142s] [ 33%] 2024-08-07T18:08:36.3461922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0182s] [ 33%] 2024-08-07T18:08:36.3463246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0183s] [ 33%] 2024-08-07T18:08:36.3464528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0145s] [ 33%] 2024-08-07T18:08:36.3465802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0153s] [ 33%] 2024-08-07T18:08:36.3467100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0101s] [ 33%] 2024-08-07T18:08:36.3468364Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 34%] 2024-08-07T18:08:36.3469635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0094s] [ 34%] 2024-08-07T18:08:36.3470900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 34%] 2024-08-07T18:08:36.3472173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0143s] [ 34%] 2024-08-07T18:08:36.3473459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0144s] [ 34%] 2024-08-07T18:08:36.3474763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0101s] [ 34%] 2024-08-07T18:08:36.3476161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 34%] 2024-08-07T18:08:36.3477428Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 34%] 2024-08-07T18:08:36.3478719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 34%] 2024-08-07T18:08:36.3480023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 34%] 2024-08-07T18:08:36.3481363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 34%] 2024-08-07T18:08:36.3482626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0086s] [ 34%] 2024-08-07T18:08:36.3483896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 34%] 2024-08-07T18:08:36.3485182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 34%] 2024-08-07T18:08:36.3486491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 34%] 2024-08-07T18:08:36.3487774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 34%] 2024-08-07T18:08:36.3489047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 34%] 2024-08-07T18:08:36.3490337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 34%] 2024-08-07T18:08:36.3491603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 34%] 2024-08-07T18:08:36.3492885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0090s] [ 34%] 2024-08-07T18:08:36.3494203Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0093s] [ 34%] 2024-08-07T18:08:36.3495895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 34%] 2024-08-07T18:08:36.3497200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 34%] 2024-08-07T18:08:36.3498475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0089s] [ 34%] 2024-08-07T18:08:36.3499851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 34%] 2024-08-07T18:08:36.3501182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0091s] [ 34%] 2024-08-07T18:08:36.3502470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 34%] 2024-08-07T18:08:36.3503743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0106s] [ 34%] 2024-08-07T18:08:36.3505048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 34%] 2024-08-07T18:08:36.3506312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0101s] [ 34%] 2024-08-07T18:08:36.3507607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0100s] [ 34%] 2024-08-07T18:08:36.3508871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 34%] 2024-08-07T18:08:36.3510157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 34%] 2024-08-07T18:08:36.3511424Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 34%] 2024-08-07T18:08:36.3512749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 34%] 2024-08-07T18:08:36.3514160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 34%] 2024-08-07T18:08:36.3515436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 34%] 2024-08-07T18:08:36.3516761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0079s] [ 34%] 2024-08-07T18:08:36.3518062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 34%] 2024-08-07T18:08:36.3519398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0093s] [ 34%] 2024-08-07T18:08:36.3520660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 34%] 2024-08-07T18:08:36.3521985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0092s] [ 34%] 2024-08-07T18:08:36.3523265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 34%] 2024-08-07T18:08:36.3524548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0120s] [ 34%] 2024-08-07T18:08:36.3525840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0121s] [ 34%] 2024-08-07T18:08:36.3527140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0103s] [ 34%] 2024-08-07T18:08:36.3528442Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0101s] [ 34%] 2024-08-07T18:08:36.3529702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0096s] [ 34%] 2024-08-07T18:08:36.3530987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 34%] 2024-08-07T18:08:36.3532313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0101s] [ 34%] 2024-08-07T18:08:36.3533658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0099s] [ 34%] 2024-08-07T18:08:36.3534922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0124s] [ 34%] 2024-08-07T18:08:36.3536199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0125s] [ 34%] 2024-08-07T18:08:36.3537541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0112s] [ 34%] 2024-08-07T18:08:36.3538864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0113s] [ 34%] 2024-08-07T18:08:36.3540147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0114s] [ 34%] 2024-08-07T18:08:36.3541424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0117s] [ 34%] 2024-08-07T18:08:36.3542716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0119s] [ 34%] 2024-08-07T18:08:36.3544006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0117s] [ 34%] 2024-08-07T18:08:36.3545291Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0140s] [ 34%] 2024-08-07T18:08:36.3546588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0142s] [ 34%] 2024-08-07T18:08:36.3547889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0131s] [ 34%] 2024-08-07T18:08:36.3549174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0128s] [ 34%] 2024-08-07T18:08:36.3550473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0088s] [ 34%] 2024-08-07T18:08:36.3551816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 34%] 2024-08-07T18:08:36.3553128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0087s] [ 34%] 2024-08-07T18:08:36.3554414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 34%] 2024-08-07T18:08:36.3555730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0116s] [ 34%] 2024-08-07T18:08:36.3557096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0116s] [ 34%] 2024-08-07T18:08:36.3558361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0097s] [ 34%] 2024-08-07T18:08:36.3559649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0096s] [ 34%] 2024-08-07T18:08:36.3560917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 34%] 2024-08-07T18:08:36.3562206Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 34%] 2024-08-07T18:08:36.3563466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 34%] 2024-08-07T18:08:36.3564742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 34%] 2024-08-07T18:08:36.3566026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 34%] 2024-08-07T18:08:36.3567314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 34%] 2024-08-07T18:08:36.3568596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 35%] 2024-08-07T18:08:36.3569913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 35%] 2024-08-07T18:08:36.3571249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 35%] 2024-08-07T18:08:36.3572508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 35%] 2024-08-07T18:08:36.3573787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 35%] 2024-08-07T18:08:36.3575101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 35%] 2024-08-07T18:08:36.3576437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 35%] 2024-08-07T18:08:36.3577730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 35%] 2024-08-07T18:08:36.3578992Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 35%] 2024-08-07T18:08:36.3580288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 35%] 2024-08-07T18:08:36.3581545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 35%] 2024-08-07T18:08:36.3582829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 35%] 2024-08-07T18:08:36.3584092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0074s] [ 35%] 2024-08-07T18:08:36.3585379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 35%] 2024-08-07T18:08:36.3586660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 35%] 2024-08-07T18:08:36.3587995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 35%] 2024-08-07T18:08:36.3589306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 35%] 2024-08-07T18:08:36.3590569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 35%] 2024-08-07T18:08:36.3591837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 35%] 2024-08-07T18:08:36.3593139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 35%] 2024-08-07T18:08:36.3594481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 35%] 2024-08-07T18:08:36.3595995Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 35%] 2024-08-07T18:08:36.3597310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 35%] 2024-08-07T18:08:36.3598576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 35%] 2024-08-07T18:08:36.3599867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 35%] 2024-08-07T18:08:36.3601121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 35%] 2024-08-07T18:08:36.3602379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 35%] 2024-08-07T18:08:36.3603670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 35%] 2024-08-07T18:08:36.3604922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 35%] 2024-08-07T18:08:36.3606194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 35%] 2024-08-07T18:08:36.3607543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 35%] 2024-08-07T18:08:36.3608912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 35%] 2024-08-07T18:08:36.3610155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 35%] 2024-08-07T18:08:36.3611434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 35%] 2024-08-07T18:08:36.3612752Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 35%] 2024-08-07T18:08:36.3614083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 35%] 2024-08-07T18:08:36.3615351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 35%] 2024-08-07T18:08:36.3616680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 35%] 2024-08-07T18:08:36.3617981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 35%] 2024-08-07T18:08:36.3619230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 35%] 2024-08-07T18:08:36.3620573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 35%] 2024-08-07T18:08:36.3621885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 35%] 2024-08-07T18:08:36.3623168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 35%] 2024-08-07T18:08:36.3624423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 35%] 2024-08-07T18:08:36.3625808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 35%] 2024-08-07T18:08:36.3627180Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 35%] 2024-08-07T18:08:36.3628445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 35%] 2024-08-07T18:08:36.3629728Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 35%] 2024-08-07T18:08:36.3631030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 35%] 2024-08-07T18:08:36.3632366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 35%] 2024-08-07T18:08:36.3633633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 35%] 2024-08-07T18:08:36.3634907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 35%] 2024-08-07T18:08:36.3636161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 35%] 2024-08-07T18:08:36.3637467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 35%] 2024-08-07T18:08:36.3638719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 35%] 2024-08-07T18:08:36.3639977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 35%] 2024-08-07T18:08:36.3641257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 35%] 2024-08-07T18:08:36.3642523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 35%] 2024-08-07T18:08:36.3643802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0143s] [ 35%] 2024-08-07T18:08:36.3645193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0149s] [ 35%] 2024-08-07T18:08:36.3646514Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0134s] [ 35%] 2024-08-07T18:08:36.3647796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0134s] [ 35%] 2024-08-07T18:08:36.3649079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0191s] [ 35%] 2024-08-07T18:08:36.3650396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0197s] [ 35%] 2024-08-07T18:08:36.3651746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0156s] [ 35%] 2024-08-07T18:08:36.3653006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0155s] [ 35%] 2024-08-07T18:08:36.3654273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0149s] [ 35%] 2024-08-07T18:08:36.3655571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0159s] [ 35%] 2024-08-07T18:08:36.3656844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0155s] [ 35%] 2024-08-07T18:08:36.3658136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0153s] [ 35%] 2024-08-07T18:08:36.3659410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0200s] [ 35%] 2024-08-07T18:08:36.3660712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0206s] [ 35%] 2024-08-07T18:08:36.3661976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0172s] [ 35%] 2024-08-07T18:08:36.3663316Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0173s] [ 35%] 2024-08-07T18:08:36.3664627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0173s] [ 35%] 2024-08-07T18:08:36.3665903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0183s] [ 35%] 2024-08-07T18:08:36.3667210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0182s] [ 35%] 2024-08-07T18:08:36.3668540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0178s] [ 35%] 2024-08-07T18:08:36.3669880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0224s] [ 36%] 2024-08-07T18:08:36.3671150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0225s] [ 36%] 2024-08-07T18:08:36.3672430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0199s] [ 36%] 2024-08-07T18:08:36.3673707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0195s] [ 36%] 2024-08-07T18:08:36.3674999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0131s] [ 36%] 2024-08-07T18:08:36.3676266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0140s] [ 36%] 2024-08-07T18:08:36.3677566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0127s] [ 36%] 2024-08-07T18:08:36.3678902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0129s] [ 36%] 2024-08-07T18:08:36.3680150Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0188s] [ 36%] 2024-08-07T18:08:36.3681436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0190s] [ 36%] 2024-08-07T18:08:36.3682758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0145s] [ 36%] 2024-08-07T18:08:36.3684109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0144s] [ 36%] 2024-08-07T18:08:36.3685389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 36%] 2024-08-07T18:08:36.3686658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 36%] 2024-08-07T18:08:36.3687971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 36%] 2024-08-07T18:08:36.3689304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 36%] 2024-08-07T18:08:36.3690562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 36%] 2024-08-07T18:08:36.3691835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 36%] 2024-08-07T18:08:36.3693127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 36%] 2024-08-07T18:08:36.3694393Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 36%] 2024-08-07T18:08:36.3695955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 36%] 2024-08-07T18:08:36.3697263Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 36%] 2024-08-07T18:08:36.3698557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 36%] 2024-08-07T18:08:36.3699820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 36%] 2024-08-07T18:08:36.3701187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 36%] 2024-08-07T18:08:36.3702529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 36%] 2024-08-07T18:08:36.3703813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 36%] 2024-08-07T18:08:36.3705082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 36%] 2024-08-07T18:08:36.3706401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 36%] 2024-08-07T18:08:36.3707785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 36%] 2024-08-07T18:08:36.3709044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 36%] 2024-08-07T18:08:36.3710325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 36%] 2024-08-07T18:08:36.3711591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 36%] 2024-08-07T18:08:36.3712890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 36%] 2024-08-07T18:08:36.3714147Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 36%] 2024-08-07T18:08:36.3715440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 36%] 2024-08-07T18:08:36.3716703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 36%] 2024-08-07T18:08:36.3717978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 36%] 2024-08-07T18:08:36.3719251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 36%] 2024-08-07T18:08:36.3720560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 36%] 2024-08-07T18:08:36.3721935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 36%] 2024-08-07T18:08:36.3723196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 36%] 2024-08-07T18:08:36.3724472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 36%] 2024-08-07T18:08:36.3725782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 36%] 2024-08-07T18:08:36.3727109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 36%] 2024-08-07T18:08:36.3728367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 36%] 2024-08-07T18:08:36.3729643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 36%] 2024-08-07T18:08:36.3730917Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 36%] 2024-08-07T18:08:36.3732174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 36%] 2024-08-07T18:08:36.3733449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 36%] 2024-08-07T18:08:36.3734709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 36%] 2024-08-07T18:08:36.3736014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 36%] 2024-08-07T18:08:36.3737287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 36%] 2024-08-07T18:08:36.3738611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 36%] 2024-08-07T18:08:36.3739936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 36%] 2024-08-07T18:08:36.3741221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 36%] 2024-08-07T18:08:36.3742478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 36%] 2024-08-07T18:08:36.3743784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 36%] 2024-08-07T18:08:36.3745117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 36%] 2024-08-07T18:08:36.3746383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 36%] 2024-08-07T18:08:36.3747684Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 36%] 2024-08-07T18:08:36.3748953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 36%] 2024-08-07T18:08:36.3750235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 36%] 2024-08-07T18:08:36.3751553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 36%] 2024-08-07T18:08:36.3752830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 36%] 2024-08-07T18:08:36.3754103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 36%] 2024-08-07T18:08:36.3755366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 36%] 2024-08-07T18:08:36.3756645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 36%] 2024-08-07T18:08:36.3757958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 36%] 2024-08-07T18:08:36.3759287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 36%] 2024-08-07T18:08:36.3760537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 36%] 2024-08-07T18:08:36.3761810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 36%] 2024-08-07T18:08:36.3763105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 36%] 2024-08-07T18:08:36.3764442Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 36%] 2024-08-07T18:08:36.3765695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 36%] 2024-08-07T18:08:36.3766977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 36%] 2024-08-07T18:08:36.3768261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 36%] 2024-08-07T18:08:36.3769529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 37%] 2024-08-07T18:08:36.3770811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 37%] 2024-08-07T18:08:36.3772080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 37%] 2024-08-07T18:08:36.3773371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 37%] 2024-08-07T18:08:36.3774638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 37%] 2024-08-07T18:08:36.3775967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 37%] 2024-08-07T18:08:36.3777307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 37%] 2024-08-07T18:08:36.3778595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 37%] 2024-08-07T18:08:36.3779860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 37%] 2024-08-07T18:08:36.3781172Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 37%] 2024-08-07T18:08:36.3782510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 37%] 2024-08-07T18:08:36.3783771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 37%] 2024-08-07T18:08:36.3785060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 37%] 2024-08-07T18:08:36.3786334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 37%] 2024-08-07T18:08:36.3787662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 37%] 2024-08-07T18:08:36.3788925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 37%] 2024-08-07T18:08:36.3790215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 37%] 2024-08-07T18:08:36.3791483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 37%] 2024-08-07T18:08:36.3792772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 37%] 2024-08-07T18:08:36.3794032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 37%] 2024-08-07T18:08:36.3795669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 37%] 2024-08-07T18:08:36.3797069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 37%] 2024-08-07T18:08:36.3798411Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 37%] 2024-08-07T18:08:36.3799701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 37%] 2024-08-07T18:08:36.3801032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 37%] 2024-08-07T18:08:36.3802381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 37%] 2024-08-07T18:08:36.3803639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 37%] 2024-08-07T18:08:36.3804923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 37%] 2024-08-07T18:08:36.3806194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 37%] 2024-08-07T18:08:36.3807493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 37%] 2024-08-07T18:08:36.3808783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 37%] 2024-08-07T18:08:36.3810046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 37%] 2024-08-07T18:08:36.3811337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 37%] 2024-08-07T18:08:36.3812594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 37%] 2024-08-07T18:08:36.3813877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 37%] 2024-08-07T18:08:36.3815207Z 
2024-08-07T18:08:36.3815207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 37%]
2024-08-07T18:08:36.3816553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 37%]
2024-08-07T18:08:36.3817832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 37%]
2024-08-07T18:08:36.3819164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 37%]
2024-08-07T18:08:36.3820479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0082s] [ 37%]
2024-08-07T18:08:36.3821786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 37%]
2024-08-07T18:08:36.3823065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 37%]
2024-08-07T18:08:36.3824333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 37%]
2024-08-07T18:08:36.3825624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 37%]
2024-08-07T18:08:36.3826889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 37%]
2024-08-07T18:08:36.3828238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 37%]
2024-08-07T18:08:36.3829513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 37%]
2024-08-07T18:08:36.3830827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0098s] [ 37%]
2024-08-07T18:08:36.3832093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0100s] [ 37%]
2024-08-07T18:08:36.3833404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 37%]
2024-08-07T18:08:36.3834742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 37%]
2024-08-07T18:08:36.3836003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0109s] [ 37%]
2024-08-07T18:08:36.3837286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 37%]
2024-08-07T18:08:36.3838608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 37%]
2024-08-07T18:08:36.3839954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 37%]
2024-08-07T18:08:36.3841206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 37%]
2024-08-07T18:08:36.3842484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 37%]
2024-08-07T18:08:36.3843745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 37%]
2024-08-07T18:08:36.3845028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 37%]
2024-08-07T18:08:36.3846299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 37%]
2024-08-07T18:08:36.3847563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 37%]
2024-08-07T18:08:36.3848847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 37%]
2024-08-07T18:08:36.3850113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 37%]
2024-08-07T18:08:36.3851381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 37%]
2024-08-07T18:08:36.3852680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 37%]
2024-08-07T18:08:36.3854060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 37%]
2024-08-07T18:08:36.3855315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 37%]
2024-08-07T18:08:36.3856641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 37%]
2024-08-07T18:08:36.3857990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 37%]
2024-08-07T18:08:36.3859236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 37%]
2024-08-07T18:08:36.3860514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 37%]
2024-08-07T18:08:36.3861772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 37%]
2024-08-07T18:08:36.3863059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 37%]
2024-08-07T18:08:36.3864311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 37%]
2024-08-07T18:08:36.3865592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 37%]
2024-08-07T18:08:36.3866854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 37%]
2024-08-07T18:08:36.3868165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 37%]
2024-08-07T18:08:36.3869420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 37%]
2024-08-07T18:08:36.3870727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 38%]
2024-08-07T18:08:36.3872052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 38%]
2024-08-07T18:08:36.3873307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 38%]
2024-08-07T18:08:36.3874573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 38%]
2024-08-07T18:08:36.3875880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 38%]
2024-08-07T18:08:36.3877225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 38%]
2024-08-07T18:08:36.3878505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 38%]
2024-08-07T18:08:36.3879783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 38%]
2024-08-07T18:08:36.3881055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 38%]
2024-08-07T18:08:36.3882354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 38%]
2024-08-07T18:08:36.3883612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 38%]
2024-08-07T18:08:36.3884870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 38%]
2024-08-07T18:08:36.3886159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 38%]
2024-08-07T18:08:36.3887408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 38%]
2024-08-07T18:08:36.3888687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 38%]
2024-08-07T18:08:36.3889994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 38%]
2024-08-07T18:08:36.3891324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_37_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 38%]
2024-08-07T18:08:36.3892576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 38%]
2024-08-07T18:08:36.3893855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 38%]
2024-08-07T18:08:36.3895456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 38%]
2024-08-07T18:08:36.3896803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 38%]
2024-08-07T18:08:36.3898091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 38%]
2024-08-07T18:08:36.3899359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 38%]
2024-08-07T18:08:36.3900666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 38%]
2024-08-07T18:08:36.3901932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 38%]
2024-08-07T18:08:36.3903203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 38%]
2024-08-07T18:08:36.3904469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 38%]
2024-08-07T18:08:36.3905747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 38%]
2024-08-07T18:08:36.3907003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 38%]
2024-08-07T18:08:36.3908368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 38%]
2024-08-07T18:08:36.3909705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 38%]
2024-08-07T18:08:36.3910970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 38%]
2024-08-07T18:08:36.3912256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 38%]
2024-08-07T18:08:36.3913597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 38%]
2024-08-07T18:08:36.3914935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 38%]
2024-08-07T18:08:36.3916188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 38%]
2024-08-07T18:08:36.3917467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 38%]
2024-08-07T18:08:36.3918749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 38%]
2024-08-07T18:08:36.3920044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 38%]
2024-08-07T18:08:36.3921336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 38%]
2024-08-07T18:08:36.3922606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 38%]
2024-08-07T18:08:36.3923886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 38%]
2024-08-07T18:08:36.3925144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 38%]
2024-08-07T18:08:36.3926414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 38%]
2024-08-07T18:08:36.3927758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 38%]
2024-08-07T18:08:36.3929103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 38%]
2024-08-07T18:08:36.3930361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 38%]
2024-08-07T18:08:36.3931631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 38%]
2024-08-07T18:08:36.3932936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 38%]
2024-08-07T18:08:36.3934251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0104s] [ 38%]
2024-08-07T18:08:36.3935534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 38%]
2024-08-07T18:08:36.3936799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0085s] [ 38%]
2024-08-07T18:08:36.3938106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 38%]
2024-08-07T18:08:36.3939376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0123s] [ 38%]
2024-08-07T18:08:36.3940670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0121s] [ 38%]
2024-08-07T18:08:36.3941942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0092s] [ 38%]
2024-08-07T18:08:36.3943244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 38%]
2024-08-07T18:08:36.3944507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0122s] [ 38%]
2024-08-07T18:08:36.3945841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0123s] [ 38%]
2024-08-07T18:08:36.3947157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0095s] [ 38%]
2024-08-07T18:08:36.3948451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 38%]
2024-08-07T18:08:36.3949735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0133s] [ 38%]
2024-08-07T18:08:36.3951063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0135s] [ 38%]
2024-08-07T18:08:36.3952401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0104s] [ 38%]
2024-08-07T18:08:36.3953674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 38%]
2024-08-07T18:08:36.3954953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0146s] [ 38%]
2024-08-07T18:08:36.3956224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0149s] [ 38%]
2024-08-07T18:08:36.3957515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0116s] [ 38%]
2024-08-07T18:08:36.3958804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0121s] [ 38%]
2024-08-07T18:08:36.3963344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0159s] [ 38%]
2024-08-07T18:08:36.3966766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0160s] [ 38%]
2024-08-07T18:08:36.3968069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0126s] [ 38%]
2024-08-07T18:08:36.3969340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0127s] [ 38%]
2024-08-07T18:08:36.3970660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0101s] [ 38%]
2024-08-07T18:08:36.3971957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 38%]
2024-08-07T18:08:36.3973216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 38%]
2024-08-07T18:08:36.3974507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 38%]
2024-08-07T18:08:36.3975847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0110s] [ 39%]
2024-08-07T18:08:36.3977197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 39%]
2024-08-07T18:08:36.3978462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 39%]
2024-08-07T18:08:36.3979756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 39%]
2024-08-07T18:08:36.3981013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 39%]
2024-08-07T18:08:36.3982290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 39%]
2024-08-07T18:08:36.3983552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 39%]
2024-08-07T18:08:36.3984944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 39%]
2024-08-07T18:08:36.3986270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 39%]
2024-08-07T18:08:36.3987535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 39%]
2024-08-07T18:08:36.3988806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 39%]
2024-08-07T18:08:36.3990074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 39%]
2024-08-07T18:08:36.3991359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 39%]
2024-08-07T18:08:36.3992616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 39%]
2024-08-07T18:08:36.3993958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 39%]
2024-08-07T18:08:36.3995803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 39%]
2024-08-07T18:08:36.3997115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 39%]
2024-08-07T18:08:36.3998384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 39%]
2024-08-07T18:08:36.3999656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 39%]
2024-08-07T18:08:36.4000968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 39%]
2024-08-07T18:08:36.4002225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 39%]
2024-08-07T18:08:36.4003582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 39%]
2024-08-07T18:08:36.4004994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 39%]
2024-08-07T18:08:36.4006369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 39%]
2024-08-07T18:08:36.4007632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 39%]
2024-08-07T18:08:36.4008925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 39%]
2024-08-07T18:08:36.4010196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 39%]
2024-08-07T18:08:36.4011501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 39%]
2024-08-07T18:08:36.4012770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 39%]
2024-08-07T18:08:36.4014088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 39%]
2024-08-07T18:08:36.4015440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 39%]
2024-08-07T18:08:36.4016699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 39%]
2024-08-07T18:08:36.4017968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 39%]
2024-08-07T18:08:36.4019229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 39%]
2024-08-07T18:08:36.4020511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 39%]
2024-08-07T18:08:36.4021822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 39%]
2024-08-07T18:08:36.4023147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 39%]
2024-08-07T18:08:36.4024515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 39%]
2024-08-07T18:08:36.4025769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 39%]
2024-08-07T18:08:36.4027103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 39%]
2024-08-07T18:08:36.4028365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 39%]
2024-08-07T18:08:36.4029649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 39%]
2024-08-07T18:08:36.4030903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 39%]
2024-08-07T18:08:36.4032183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 39%]
2024-08-07T18:08:36.4033486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 39%]
2024-08-07T18:08:36.4034837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 39%]
2024-08-07T18:08:36.4036086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 39%]
2024-08-07T18:08:36.4037343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 39%]
2024-08-07T18:08:36.4038627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 39%]
2024-08-07T18:08:36.4039888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 39%]
2024-08-07T18:08:36.4041159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 39%]
2024-08-07T18:08:36.4042467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 39%]
2024-08-07T18:08:36.4043805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 39%]
2024-08-07T18:08:36.4045083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 39%]
2024-08-07T18:08:36.4046356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 39%]
2024-08-07T18:08:36.4047630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 39%]
2024-08-07T18:08:36.4048898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 39%]
2024-08-07T18:08:36.4050161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 39%]
2024-08-07T18:08:36.4051478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 39%]
2024-08-07T18:08:36.4052806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 39%]
2024-08-07T18:08:36.4054057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 39%]
2024-08-07T18:08:36.4055353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 39%]
2024-08-07T18:08:36.4056599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 39%]
2024-08-07T18:08:36.4057873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 39%]
2024-08-07T18:08:36.4059123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 39%]
2024-08-07T18:08:36.4060446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 39%]
2024-08-07T18:08:36.4061755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 39%]
2024-08-07T18:08:36.4063021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 39%]
2024-08-07T18:08:36.4063800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_mask_variants_mask_dim_1_cuda PASSED [0.0038s] [ 39%]
2024-08-07T18:08:36.4064566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_mask_variants_mask_dim_2_cuda PASSED [0.0022s] [ 39%]
2024-08-07T18:08:36.4065348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_mask_variants_mask_dim_3_cuda PASSED [0.0021s] [ 39%]
2024-08-07T18:08:36.4066115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_mask_variants_mask_dim_4_cuda PASSED [0.0021s] [ 39%]
2024-08-07T18:08:36.4067356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 39%]
2024-08-07T18:08:36.4068598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 39%]
2024-08-07T18:08:36.4069867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 39%]
2024-08-07T18:08:36.4071875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 39%]
2024-08-07T18:08:36.4073100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0105s] [ 39%]
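The test_mem_efficient_attention_mask_variants_mask_dim_{1..4}_cuda cases just logged exercise attn_mask tensors of rank 1 through 4, all of which scaled_dot_product_attention accepts as long as they broadcast against the (batch, num_heads, seq_len_q, seq_len_kv) attention-weight shape. A minimal sketch of that broadcasting follows; the shapes here are illustrative assumptions, not the suite's actual parameters, and a CUDA device is assumed:

    import torch
    import torch.nn.functional as F

    # Illustrative shapes only (assumed, not taken from the test suite).
    batch, num_heads, seq_len_q, seq_len_kv, head_dim = 8, 8, 64, 32, 64
    q = torch.rand(batch, num_heads, seq_len_q, head_dim, device="cuda", dtype=torch.float16)
    k = torch.rand(batch, num_heads, seq_len_kv, head_dim, device="cuda", dtype=torch.float16)
    v = torch.rand(batch, num_heads, seq_len_kv, head_dim, device="cuda", dtype=torch.float16)

    # Rank-1 through rank-4 additive masks, mirroring mask_dim_1 .. mask_dim_4;
    # each broadcasts to (batch, num_heads, seq_len_q, seq_len_kv).
    masks = [
        torch.randn(seq_len_kv, device="cuda", dtype=torch.float16),
        torch.randn(seq_len_q, seq_len_kv, device="cuda", dtype=torch.float16),
        torch.randn(num_heads, seq_len_q, seq_len_kv, device="cuda", dtype=torch.float16),
        torch.randn(batch, num_heads, seq_len_q, seq_len_kv, device="cuda", dtype=torch.float16),
    ]
    for attn_mask in masks:
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
        assert out.shape == (batch, num_heads, seq_len_q, head_dim)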
2024-08-07T18:08:36.4074349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 40%]
2024-08-07T18:08:36.4075600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 40%]
2024-08-07T18:08:36.4076863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 40%]
2024-08-07T18:08:36.4078077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0278s] [ 40%]
2024-08-07T18:08:36.4079294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 40%]
2024-08-07T18:08:36.4080581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 40%]
2024-08-07T18:08:36.4081869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 40%]
2024-08-07T18:08:36.4083102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0097s] [ 40%]
2024-08-07T18:08:36.4084326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0100s] [ 40%]
2024-08-07T18:08:36.4085594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 40%]
2024-08-07T18:08:36.4086827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 40%]
2024-08-07T18:08:36.4088065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 40%]
2024-08-07T18:08:36.4089338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 40%]
2024-08-07T18:08:36.4090639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 40%]
2024-08-07T18:08:36.4091883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 40%]
2024-08-07T18:08:36.4093114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0107s] [ 40%]
2024-08-07T18:08:36.4094358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 40%]
2024-08-07T18:08:36.4095906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 40%]
2024-08-07T18:08:36.4097173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 40%]
2024-08-07T18:08:36.4098389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 40%]
2024-08-07T18:08:36.4099731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 40%]
2024-08-07T18:08:36.4101030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 40%]
2024-08-07T18:08:36.4102264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 40%]
2024-08-07T18:08:36.4103485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0100s] [ 40%]
2024-08-07T18:08:36.4104715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0101s] [ 40%]
2024-08-07T18:08:36.4105979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 40%]
2024-08-07T18:08:36.4107233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 40%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 40%] 2024-08-07T18:08:36.4109843Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0093s] [ 40%] 2024-08-07T18:08:36.4111094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 40%] 2024-08-07T18:08:36.4112308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 40%] 2024-08-07T18:08:36.4113558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0110s] [ 40%] 2024-08-07T18:08:36.4114802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0110s] [ 40%] 2024-08-07T18:08:36.4116050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 40%] 2024-08-07T18:08:36.4117298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 40%] 2024-08-07T18:08:36.4118562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 40%] 2024-08-07T18:08:36.4119867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 40%] 2024-08-07T18:08:36.4121075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 40%] 2024-08-07T18:08:36.4122369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 40%] 2024-08-07T18:08:36.4123601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0105s] [ 40%] 2024-08-07T18:08:36.4124845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 40%] 
2024-08-07T18:08:36.4126088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 40%] 2024-08-07T18:08:36.4127359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 40%] 2024-08-07T18:08:36.4128650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0078s] [ 40%] 2024-08-07T18:08:36.4129874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 40%] 2024-08-07T18:08:36.4131106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 40%] 2024-08-07T18:08:36.4132335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 40%] 2024-08-07T18:08:36.4133587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0105s] [ 40%] 2024-08-07T18:08:36.4134820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 40%] 2024-08-07T18:08:36.4136083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 40%] 2024-08-07T18:08:36.4137361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 40%] 2024-08-07T18:08:36.4138637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 40%] 2024-08-07T18:08:36.4139869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 40%] 2024-08-07T18:08:36.4141083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 40%] 2024-08-07T18:08:36.4142329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED 
[0.0062s] [ 40%] 2024-08-07T18:08:36.4143543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0098s] [ 40%] 2024-08-07T18:08:36.4144785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 40%] 2024-08-07T18:08:36.4146080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 40%] 2024-08-07T18:08:36.4147373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 40%] 2024-08-07T18:08:36.4148593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0114s] [ 40%] 2024-08-07T18:08:36.4149821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0122s] [ 40%] 2024-08-07T18:08:36.4151062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 40%] 2024-08-07T18:08:36.4152294Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 40%] 2024-08-07T18:08:36.4153594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0152s] [ 40%] 2024-08-07T18:08:36.4154825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0157s] [ 40%] 2024-08-07T18:08:36.4156149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 40%] 2024-08-07T18:08:36.4157442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 40%] 2024-08-07T18:08:36.4158670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 40%] 2024-08-07T18:08:36.4159895Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 40%] 2024-08-07T18:08:36.4161124Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 40%] 2024-08-07T18:08:36.4162390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 40%] 2024-08-07T18:08:36.4163611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0103s] [ 40%] 2024-08-07T18:08:36.4164906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 40%] 2024-08-07T18:08:36.4166216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 40%] 2024-08-07T18:08:36.4167473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 40%] 2024-08-07T18:08:36.4168688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 40%] 2024-08-07T18:08:36.4169930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0126s] [ 40%] 2024-08-07T18:08:36.4171160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 40%] 2024-08-07T18:08:36.4172384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 41%] 2024-08-07T18:08:36.4173623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0154s] [ 41%] 2024-08-07T18:08:36.4174897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0159s] [ 41%] 2024-08-07T18:08:36.4176218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 41%] 
2024-08-07T18:08:36.4177450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 41%] 2024-08-07T18:08:36.4178691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 41%] 2024-08-07T18:08:36.4179923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 41%] 2024-08-07T18:08:36.4181155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 41%] 2024-08-07T18:08:36.4182372Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 41%] 2024-08-07T18:08:36.4183632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0104s] [ 41%] 2024-08-07T18:08:36.4184928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 41%] 2024-08-07T18:08:36.4186171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 41%] 2024-08-07T18:08:36.4187417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 41%] 2024-08-07T18:08:36.4188645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0125s] [ 41%] 2024-08-07T18:08:36.4189900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0137s] [ 41%] 2024-08-07T18:08:36.4191117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 41%] 2024-08-07T18:08:36.4192360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 41%] 2024-08-07T18:08:36.4193636Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0164s] [ 41%] 2024-08-07T18:08:36.4194945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0167s] [ 41%] 2024-08-07T18:08:36.4196533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 41%] 2024-08-07T18:08:36.4197772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 41%] 2024-08-07T18:08:36.4199015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0093s] [ 41%] 2024-08-07T18:08:36.4200230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0095s] [ 41%] 2024-08-07T18:08:36.4201461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 41%] 2024-08-07T18:08:36.4202765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 41%] 2024-08-07T18:08:36.4204108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0109s] [ 41%] 2024-08-07T18:08:36.4205329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 41%] 2024-08-07T18:08:36.4206571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 41%] 2024-08-07T18:08:36.4207823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 41%] 2024-08-07T18:08:36.4209063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0109s] [ 41%] 2024-08-07T18:08:36.4210324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 41%] 
2024-08-07T18:08:36.4211605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 41%] 2024-08-07T18:08:36.4212925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 41%] 2024-08-07T18:08:36.4214157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0151s] [ 41%] 2024-08-07T18:08:36.4215404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0155s] [ 41%] 2024-08-07T18:08:36.4216658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 41%] 2024-08-07T18:08:36.4217904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 41%] 2024-08-07T18:08:36.4219139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 41%] 2024-08-07T18:08:36.4220359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 41%] 2024-08-07T18:08:36.4221739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 41%] 2024-08-07T18:08:36.4223060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 41%] 2024-08-07T18:08:36.4224297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0102s] [ 41%] 2024-08-07T18:08:36.4225522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 41%] 2024-08-07T18:08:36.4226791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 41%] 2024-08-07T18:08:36.4228029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED 
[0.0073s] [ 41%] 2024-08-07T18:08:36.4229251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 41%] 2024-08-07T18:08:36.4230542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 41%] 2024-08-07T18:08:36.4231810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 41%] 2024-08-07T18:08:36.4233059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 41%] 2024-08-07T18:08:36.4234278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 41%] 2024-08-07T18:08:36.4235528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 41%] 2024-08-07T18:08:36.4236777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 41%] 2024-08-07T18:08:36.4238021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 41%] 2024-08-07T18:08:36.4239228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 41%] 2024-08-07T18:08:36.4240506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 41%] 2024-08-07T18:08:36.4241788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 41%] 2024-08-07T18:08:36.4243007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 41%] 2024-08-07T18:08:36.4244238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 41%] 2024-08-07T18:08:36.4245464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 
PASSED [0.0078s] [ 41%] 2024-08-07T18:08:36.4246727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 41%] 2024-08-07T18:08:36.4247944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 41%] 2024-08-07T18:08:36.4249173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 41%] 2024-08-07T18:08:36.4250440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 41%] 2024-08-07T18:08:36.4251731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 41%] 2024-08-07T18:08:36.4252946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 41%] 2024-08-07T18:08:36.4254170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 41%] 2024-08-07T18:08:36.4255421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 41%] 2024-08-07T18:08:36.4256665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 41%] 2024-08-07T18:08:36.4257910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 41%] 2024-08-07T18:08:36.4259166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 41%] 2024-08-07T18:08:36.4260455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 41%] 2024-08-07T18:08:36.4261654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 41%] 2024-08-07T18:08:36.4262912Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 41%] 2024-08-07T18:08:36.4264128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 41%] 2024-08-07T18:08:36.4265353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 41%] 2024-08-07T18:08:36.4266598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 41%] 2024-08-07T18:08:36.4267810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 41%] 2024-08-07T18:08:36.4269086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 42%] 2024-08-07T18:08:36.4270362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 42%] 2024-08-07T18:08:36.4271589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 42%] 2024-08-07T18:08:36.4272806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 42%] 2024-08-07T18:08:36.4274048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 42%] 2024-08-07T18:08:36.4275270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 42%] 2024-08-07T18:08:36.4276492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 42%] 2024-08-07T18:08:36.4277776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 42%] 2024-08-07T18:08:36.4279035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 42%] 
2024-08-07T18:08:36.4280260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 42%] 2024-08-07T18:08:36.4281464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 42%] 2024-08-07T18:08:36.4282692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 42%] 2024-08-07T18:08:36.4283907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0084s] [ 42%] 2024-08-07T18:08:36.4285146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 42%] 2024-08-07T18:08:36.4286375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 42%] 2024-08-07T18:08:36.4287654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 42%] 2024-08-07T18:08:36.4288928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 42%] 2024-08-07T18:08:36.4290141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 42%] 2024-08-07T18:08:36.4291373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 42%] 2024-08-07T18:08:36.4292597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 42%] 2024-08-07T18:08:36.4293832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 42%] 2024-08-07T18:08:36.4295286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 42%] 2024-08-07T18:08:36.4296644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 42%] 
2024-08-07T18:08:36.4297947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 42%] 2024-08-07T18:08:36.4299155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 42%] 2024-08-07T18:08:36.4300385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 42%] 2024-08-07T18:08:36.4301600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 42%] 2024-08-07T18:08:36.4302841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 42%] 2024-08-07T18:08:36.4304048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 42%] 2024-08-07T18:08:36.4305277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 42%] 2024-08-07T18:08:36.4306550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 42%] 2024-08-07T18:08:36.4307874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 42%] 2024-08-07T18:08:36.4309096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0183s] [ 42%] 2024-08-07T18:08:36.4310326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0200s] [ 42%] 2024-08-07T18:08:36.4311576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0089s] [ 42%] 2024-08-07T18:08:36.4312824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 42%] 2024-08-07T18:08:36.4314069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0259s] [ 42%] 
2024-08-07T18:08:36.4315348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0267s] [ 42%] 2024-08-07T18:08:36.4316645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 42%] 2024-08-07T18:08:36.4317881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 42%] 2024-08-07T18:08:36.4319114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0103s] [ 42%] 2024-08-07T18:08:36.4320342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 42%] 2024-08-07T18:08:36.4321632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 42%] 2024-08-07T18:08:36.4322877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 42%] 2024-08-07T18:08:36.4324096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0120s] [ 42%] 2024-08-07T18:08:36.4325389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0122s] [ 42%] 2024-08-07T18:08:36.4326671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 42%] 2024-08-07T18:08:36.4327914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 42%] 2024-08-07T18:08:36.4329147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0190s] [ 42%] 2024-08-07T18:08:36.4330410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0207s] [ 42%] 2024-08-07T18:08:36.4331636Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0089s] [ 42%] 2024-08-07T18:08:36.4332866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0097s] [ 42%] 2024-08-07T18:08:36.4334157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0264s] [ 42%] 2024-08-07T18:08:36.4335441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0273s] [ 42%] 2024-08-07T18:08:36.4336710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0097s] [ 42%] 2024-08-07T18:08:36.4337943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 42%] 2024-08-07T18:08:36.4339185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0106s] [ 42%] 2024-08-07T18:08:36.4340424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 42%] 2024-08-07T18:08:36.4341656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 42%] 2024-08-07T18:08:36.4342879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 42%] 2024-08-07T18:08:36.4344150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0126s] [ 42%] 2024-08-07T18:08:36.4345454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 42%] 2024-08-07T18:08:36.4346693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 42%] 2024-08-07T18:08:36.4347946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 42%] 
2024-08-07T18:08:36.4349178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0214s] [ 42%] 2024-08-07T18:08:36.4350429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0230s] [ 42%] 2024-08-07T18:08:36.4351654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0094s] [ 42%] 2024-08-07T18:08:36.4352948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0097s] [ 42%] 2024-08-07T18:08:36.4354233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0284s] [ 42%] 2024-08-07T18:08:36.4355473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0291s] [ 42%] 2024-08-07T18:08:36.4356749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0102s] [ 42%] 2024-08-07T18:08:36.4357993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0104s] [ 42%] 2024-08-07T18:08:36.4359240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 42%] 2024-08-07T18:08:36.4360462Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0116s] [ 42%] 2024-08-07T18:08:36.4361697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0091s] [ 42%] 2024-08-07T18:08:36.4362967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 42%] 2024-08-07T18:08:36.4364262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0135s] [ 42%] 2024-08-07T18:08:36.4365486Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 42%] 2024-08-07T18:08:36.4366721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0097s] [ 43%] 2024-08-07T18:08:36.4367977Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0099s] [ 43%] 2024-08-07T18:08:36.4369200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0180s] [ 43%] 2024-08-07T18:08:36.4370449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0191s] [ 43%] 2024-08-07T18:08:36.4371716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0086s] [ 43%] 2024-08-07T18:08:36.4373016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 43%] 2024-08-07T18:08:36.4374237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0257s] [ 43%] 2024-08-07T18:08:36.4375497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0264s] [ 43%] 2024-08-07T18:08:36.4376730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 43%] 2024-08-07T18:08:36.4377971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 43%] 2024-08-07T18:08:36.4379209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0102s] [ 43%] 2024-08-07T18:08:36.4380426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 43%] 2024-08-07T18:08:36.4381719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 43%] 
2024-08-07T18:08:36.4382990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 43%]
2024-08-07T18:08:36.4384224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0116s] [ 43%]
2024-08-07T18:08:36.4385449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0118s] [ 43%]
2024-08-07T18:08:36.4386710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 43%]
2024-08-07T18:08:36.4387940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 43%]
2024-08-07T18:08:36.4389178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 43%]
2024-08-07T18:08:36.4390471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 43%]
2024-08-07T18:08:36.4391739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 43%]
2024-08-07T18:08:36.4392980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 43%]
2024-08-07T18:08:36.4394201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 43%]
2024-08-07T18:08:36.4395716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 43%]
2024-08-07T18:08:36.4396981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 43%]
2024-08-07T18:08:36.4398231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 43%]
2024-08-07T18:08:36.4399440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 43%]
2024-08-07T18:08:36.4400744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 43%]
2024-08-07T18:08:36.4402042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 43%]
2024-08-07T18:08:36.4403256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 43%]
2024-08-07T18:08:36.4404558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 43%]
2024-08-07T18:08:36.4405786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 43%]
2024-08-07T18:08:36.4407018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 43%]
2024-08-07T18:08:36.4408239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 43%]
2024-08-07T18:08:36.4409610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 43%]
2024-08-07T18:08:36.4410946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 43%]
2024-08-07T18:08:36.4412165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 43%]
2024-08-07T18:08:36.4413406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 43%]
2024-08-07T18:08:36.4414628Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0086s] [ 43%]
2024-08-07T18:08:36.4415880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 43%]
2024-08-07T18:08:36.4417109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 43%]
2024-08-07T18:08:36.4418358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 43%]
2024-08-07T18:08:36.4419658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 43%]
2024-08-07T18:08:36.4420948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 43%]
2024-08-07T18:08:36.4422202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 43%]
2024-08-07T18:08:36.4423423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 43%]
2024-08-07T18:08:36.4424685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 43%]
2024-08-07T18:08:36.4425884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 43%]
2024-08-07T18:08:36.4427114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 43%]
2024-08-07T18:08:36.4428377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 43%]
2024-08-07T18:08:36.4429674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 43%]
2024-08-07T18:08:36.4430888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 43%]
2024-08-07T18:08:36.4432122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 43%]
2024-08-07T18:08:36.4433351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 43%]
2024-08-07T18:08:36.4434585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0086s] [ 43%]
2024-08-07T18:08:36.4435829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 43%]
2024-08-07T18:08:36.4437049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 43%]
2024-08-07T18:08:36.4438340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 43%]
2024-08-07T18:08:36.4439607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0074s] [ 43%]
2024-08-07T18:08:36.4440837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 43%]
2024-08-07T18:08:36.4442056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 43%]
2024-08-07T18:08:36.4443304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 43%]
2024-08-07T18:08:36.4444537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 43%]
2024-08-07T18:08:36.4445761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 43%]
2024-08-07T18:08:36.4447040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 43%]
2024-08-07T18:08:36.4448338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 43%]
2024-08-07T18:08:36.4449566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 43%]
2024-08-07T18:08:36.4450785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 43%]
2024-08-07T18:08:36.4452022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 43%]
2024-08-07T18:08:36.4453304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 43%]
2024-08-07T18:08:36.4454535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 43%]
2024-08-07T18:08:36.4455758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 43%]
2024-08-07T18:08:36.4457021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 43%]
2024-08-07T18:08:36.4458329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 43%]
2024-08-07T18:08:36.4459535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 43%]
2024-08-07T18:08:36.4460773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 43%]
2024-08-07T18:08:36.4461992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 43%]
2024-08-07T18:08:36.4463233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 44%]
2024-08-07T18:08:36.4464438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0086s] [ 44%]
2024-08-07T18:08:36.4465721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 44%]
2024-08-07T18:08:36.4466982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 44%]
2024-08-07T18:08:36.4468213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 44%]
2024-08-07T18:08:36.4469450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 44%]
2024-08-07T18:08:36.4470668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 44%]
2024-08-07T18:08:36.4471910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 44%]
2024-08-07T18:08:36.4473131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 44%]
2024-08-07T18:08:36.4474361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 44%]
2024-08-07T18:08:36.4475626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 44%]
2024-08-07T18:08:36.4476910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4478145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4479349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 44%]
2024-08-07T18:08:36.4480580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4481788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 44%]
2024-08-07T18:08:36.4483016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 44%]
2024-08-07T18:08:36.4484265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 44%]
2024-08-07T18:08:36.4485550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 44%]
2024-08-07T18:08:36.4486754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 44%]
2024-08-07T18:08:36.4488002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 44%]
2024-08-07T18:08:36.4489216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4490447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 44%]
2024-08-07T18:08:36.4491702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 44%]
2024-08-07T18:08:36.4492925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 44%]
2024-08-07T18:08:36.4494208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 44%]
2024-08-07T18:08:36.4495718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 44%]
2024-08-07T18:08:36.4496971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 44%]
2024-08-07T18:08:36.4498209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4499437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 44%]
2024-08-07T18:08:36.4500652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4501872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 44%]
2024-08-07T18:08:36.4503156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 44%]
2024-08-07T18:08:36.4504443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0084s] [ 44%]
2024-08-07T18:08:36.4505674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 44%]
2024-08-07T18:08:36.4506884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 44%]
2024-08-07T18:08:36.4508140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 44%]
2024-08-07T18:08:36.4509363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 44%]
2024-08-07T18:08:36.4510598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 44%]
2024-08-07T18:08:36.4511807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 44%]
2024-08-07T18:08:36.4513113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 44%]
2024-08-07T18:08:36.4514398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0084s] [ 44%]
2024-08-07T18:08:36.4515646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 44%]
2024-08-07T18:08:36.4516880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 44%]
2024-08-07T18:08:36.4518126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4519363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 44%]
2024-08-07T18:08:36.4520572Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 44%]
2024-08-07T18:08:36.4521947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 44%]
2024-08-07T18:08:36.4523219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 44%]
2024-08-07T18:08:36.4524445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 44%]
2024-08-07T18:08:36.4525662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 44%]
2024-08-07T18:08:36.4526878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 44%]
2024-08-07T18:08:36.4528145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 44%]
2024-08-07T18:08:36.4529357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 44%]
2024-08-07T18:08:36.4530584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4531836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 44%]
2024-08-07T18:08:36.4533122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 44%]
2024-08-07T18:08:36.4534338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 44%]
2024-08-07T18:08:36.4535578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 44%]
2024-08-07T18:08:36.4536796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 44%]
2024-08-07T18:08:36.4538047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 44%]
2024-08-07T18:08:36.4539270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 44%]
2024-08-07T18:08:36.4540483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 44%]
2024-08-07T18:08:36.4541822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 44%]
2024-08-07T18:08:36.4543077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 44%]
2024-08-07T18:08:36.4544300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0079s] [ 44%]
2024-08-07T18:08:36.4545516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 44%]
2024-08-07T18:08:36.4546751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 44%]
2024-08-07T18:08:36.4547989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 44%]
2024-08-07T18:08:36.4549212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0110s] [ 44%]
2024-08-07T18:08:36.4550499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0116s] [ 44%]
2024-08-07T18:08:36.4551778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 44%]
2024-08-07T18:08:36.4553035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 44%]
2024-08-07T18:08:36.4554256Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0148s] [ 44%]
2024-08-07T18:08:36.4555507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0152s] [ 44%]
2024-08-07T18:08:36.4556740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 44%]
2024-08-07T18:08:36.4558016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 44%]
2024-08-07T18:08:36.4559239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0105s] [ 45%]
2024-08-07T18:08:36.4560507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0110s] [ 45%]
2024-08-07T18:08:36.4561792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 45%]
2024-08-07T18:08:36.4563013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 45%]
2024-08-07T18:08:36.4564254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0143s] [ 45%]
2024-08-07T18:08:36.4565483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0145s] [ 45%]
2024-08-07T18:08:36.4566718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 45%]
2024-08-07T18:08:36.4567968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 45%]
2024-08-07T18:08:36.4569254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0110s] [ 45%]
2024-08-07T18:08:36.4570530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0119s] [ 45%]
2024-08-07T18:08:36.4571753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 45%]
2024-08-07T18:08:36.4572998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 45%]
2024-08-07T18:08:36.4574225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0148s] [ 45%]
2024-08-07T18:08:36.4575482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0152s] [ 45%]
2024-08-07T18:08:36.4576717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 45%]
2024-08-07T18:08:36.4578028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 45%]
2024-08-07T18:08:36.4579299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0106s] [ 45%]
2024-08-07T18:08:36.4580535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 45%]
2024-08-07T18:08:36.4581750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 45%]
2024-08-07T18:08:36.4582972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 45%]
2024-08-07T18:08:36.4584291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0144s] [ 45%]
2024-08-07T18:08:36.4585518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0148s] [ 45%]
2024-08-07T18:08:36.4586763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 45%]
2024-08-07T18:08:36.4588064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 45%]
2024-08-07T18:08:36.4589382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0118s] [ 45%]
2024-08-07T18:08:36.4590610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0129s] [ 45%]
2024-08-07T18:08:36.4591847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 45%]
2024-08-07T18:08:36.4593086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 45%]
2024-08-07T18:08:36.4594320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0157s] [ 45%]
2024-08-07T18:08:36.4595817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0160s] [ 45%]
2024-08-07T18:08:36.4597135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 45%]
2024-08-07T18:08:36.4598481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 45%]
2024-08-07T18:08:36.4599693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 45%]
2024-08-07T18:08:36.4600926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 45%]
2024-08-07T18:08:36.4602142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 45%]
2024-08-07T18:08:36.4603392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 45%]
2024-08-07T18:08:36.4604610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0153s] [ 45%]
2024-08-07T18:08:36.4605884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0153s] [ 45%]
2024-08-07T18:08:36.4607182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 45%]
2024-08-07T18:08:36.4608497Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 45%]
2024-08-07T18:08:36.4609772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0105s] [ 45%]
2024-08-07T18:08:36.4610990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 45%]
2024-08-07T18:08:36.4612233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 45%]
2024-08-07T18:08:36.4613468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 45%]
2024-08-07T18:08:36.4614708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0146s] [ 45%]
2024-08-07T18:08:36.4615985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0149s] [ 45%]
2024-08-07T18:08:36.4617263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 45%]
2024-08-07T18:08:36.4618532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 45%]
2024-08-07T18:08:36.4619745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0102s] [ 45%]
2024-08-07T18:08:36.4620988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 45%]
2024-08-07T18:08:36.4622251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 45%]
2024-08-07T18:08:36.4623490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 45%]
2024-08-07T18:08:36.4624701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0141s] [ 45%]
2024-08-07T18:08:36.4625986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0144s] [ 45%]
2024-08-07T18:08:36.4627255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 45%]
2024-08-07T18:08:36.4628500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 45%]
2024-08-07T18:08:36.4629740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0168s] [ 45%]
2024-08-07T18:08:36.4630973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0179s] [ 45%]
2024-08-07T18:08:36.4632224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 45%]
2024-08-07T18:08:36.4633448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 45%]
2024-08-07T18:08:36.4634732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0242s] [ 45%]
2024-08-07T18:08:36.4636014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0247s] [ 45%]
2024-08-07T18:08:36.4637258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 45%]
2024-08-07T18:08:36.4638514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 45%]
2024-08-07T18:08:36.4639734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0138s] [ 45%]
2024-08-07T18:08:36.4640983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0143s] [ 45%]
2024-08-07T18:08:36.4642196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 45%]
2024-08-07T18:08:36.4643437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 45%]
2024-08-07T18:08:36.4644702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0190s] [ 45%]
2024-08-07T18:08:36.4645998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0192s] [ 45%]
2024-08-07T18:08:36.4647214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 45%]
2024-08-07T18:08:36.4648479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 45%]
2024-08-07T18:08:36.4649710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0174s] [ 45%]
2024-08-07T18:08:36.4650947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0187s] [ 45%]
2024-08-07T18:08:36.4652191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 45%]
2024-08-07T18:08:36.4653461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 45%]
2024-08-07T18:08:36.4654761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0244s] [ 45%]
2024-08-07T18:08:36.4655994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0250s] [ 45%]
2024-08-07T18:08:36.4657238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 46%]
2024-08-07T18:08:36.4658500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 46%]
2024-08-07T18:08:36.4659757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0143s] [ 46%]
2024-08-07T18:08:36.4660964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0149s] [ 46%]
2024-08-07T18:08:36.4662178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 46%]
2024-08-07T18:08:36.4663472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 46%]
2024-08-07T18:08:36.4664747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0197s] [ 46%]
2024-08-07T18:08:36.4665986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0196s] [ 46%]
2024-08-07T18:08:36.4667205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 46%]
2024-08-07T18:08:36.4668481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 46%]
2024-08-07T18:08:36.4669713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0192s] [ 46%]
2024-08-07T18:08:36.4670961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0211s] [ 46%]
2024-08-07T18:08:36.4672225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0090s] [ 46%]
2024-08-07T18:08:36.4673506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 46%]
2024-08-07T18:08:36.4674751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0261s] [ 46%]
2024-08-07T18:08:36.4675983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0264s] [ 46%]
2024-08-07T18:08:36.4677233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0097s] [ 46%]
2024-08-07T18:08:36.4678498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 46%]
2024-08-07T18:08:36.4679726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0156s] [ 46%]
2024-08-07T18:08:36.4680945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0165s] [ 46%]
2024-08-07T18:08:36.4682234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0086s] [ 46%]
2024-08-07T18:08:36.4683500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 46%]
2024-08-07T18:08:36.4684739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0207s] [ 46%]
2024-08-07T18:08:36.4685967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0208s] [ 46%]
2024-08-07T18:08:36.4687194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0096s] [ 46%]
2024-08-07T18:08:36.4688460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 46%]
2024-08-07T18:08:36.4689676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0161s] [ 46%]
2024-08-07T18:08:36.4690964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0170s] [ 46%]
2024-08-07T18:08:36.4692233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 46%]
2024-08-07T18:08:36.4693474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 46%]
2024-08-07T18:08:36.4694699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0238s] [ 46%]
2024-08-07T18:08:36.4696196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0242s] [ 46%]
2024-08-07T18:08:36.4697443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 46%]
2024-08-07T18:08:36.4698694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 46%]
2024-08-07T18:08:36.4699928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0135s] [ 46%]
2024-08-07T18:08:36.4701226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0137s] [ 46%]
2024-08-07T18:08:36.4702535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 46%]
2024-08-07T18:08:36.4703750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 46%]
2024-08-07T18:08:36.4704988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0186s] [ 46%]
2024-08-07T18:08:36.4706214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0187s] [ 46%]
2024-08-07T18:08:36.4707449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 46%]
2024-08-07T18:08:36.4708692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 46%]
2024-08-07T18:08:36.4709971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 46%]
2024-08-07T18:08:36.4711281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 46%]
2024-08-07T18:08:36.4712494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 46%]
2024-08-07T18:08:36.4713726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 46%]
2024-08-07T18:08:36.4714941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0100s] [ 46%]
2024-08-07T18:08:36.4716188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 46%]
2024-08-07T18:08:36.4717402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 46%]
2024-08-07T18:08:36.4718665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 46%]
2024-08-07T18:08:36.4719906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 46%]
2024-08-07T18:08:36.4727856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 46%]
2024-08-07T18:08:36.4729213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 46%]
2024-08-07T18:08:36.4730473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 46%]
2024-08-07T18:08:36.4731701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16
PASSED [0.0104s] [ 46%] 2024-08-07T18:08:36.4732936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 46%] 2024-08-07T18:08:36.4734154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 46%] 2024-08-07T18:08:36.4735498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 46%] 2024-08-07T18:08:36.4736823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 46%] 2024-08-07T18:08:36.4738038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 46%] 2024-08-07T18:08:36.4739263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 46%] 2024-08-07T18:08:36.4740487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 46%] 2024-08-07T18:08:36.4741735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0104s] [ 46%] 2024-08-07T18:08:36.4742958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 46%] 2024-08-07T18:08:36.4744170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 46%] 2024-08-07T18:08:36.4745481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 46%] 2024-08-07T18:08:36.4746741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0082s] [ 46%] 2024-08-07T18:08:36.4747970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 46%] 2024-08-07T18:08:36.4749175Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 46%] 2024-08-07T18:08:36.4750424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 46%] 2024-08-07T18:08:36.4751624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0105s] [ 46%] 2024-08-07T18:08:36.4752907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0109s] [ 46%] 2024-08-07T18:08:36.4754195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 46%] 2024-08-07T18:08:36.4755493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 46%] 2024-08-07T18:08:36.4756713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 46%] 2024-08-07T18:08:36.4757927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 46%] 2024-08-07T18:08:36.4759166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 46%] 2024-08-07T18:08:36.4760388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 47%] 2024-08-07T18:08:36.4761616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0110s] [ 47%] 2024-08-07T18:08:36.4762831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 47%] 2024-08-07T18:08:36.4764112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 47%] 2024-08-07T18:08:36.4765403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 47%] 
2024-08-07T18:08:36.4766607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0088s] [ 47%] 2024-08-07T18:08:36.4767832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 47%] 2024-08-07T18:08:36.4769037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 47%] 2024-08-07T18:08:36.4770274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 47%] 2024-08-07T18:08:36.4771481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0111s] [ 47%] 2024-08-07T18:08:36.4772759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 47%] 2024-08-07T18:08:36.4774016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 47%] 2024-08-07T18:08:36.4775263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 47%] 2024-08-07T18:08:36.4776471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 47%] 2024-08-07T18:08:36.4777691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 47%] 2024-08-07T18:08:36.4778926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 47%] 2024-08-07T18:08:36.4780140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 47%] 2024-08-07T18:08:36.4781366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 47%] 2024-08-07T18:08:36.4782630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0101s] [ 47%] 
2024-08-07T18:08:36.4783912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 47%] 2024-08-07T18:08:36.4785140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 47%] 2024-08-07T18:08:36.4786359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 47%] 2024-08-07T18:08:36.4787566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 47%] 2024-08-07T18:08:36.4788773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 47%] 2024-08-07T18:08:36.4790008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 47%] 2024-08-07T18:08:36.4791201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0101s] [ 47%] 2024-08-07T18:08:36.4792480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 47%] 2024-08-07T18:08:36.4793734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 47%] 2024-08-07T18:08:36.4794966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 47%] 2024-08-07T18:08:36.4796529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0283s] [ 47%] 2024-08-07T18:08:36.4797783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0313s] [ 47%] 2024-08-07T18:08:36.4799003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0106s] [ 47%] 2024-08-07T18:08:36.4800230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0106s] [ 47%] 
2024-08-07T18:08:36.4801577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0431s] [ 47%] 2024-08-07T18:08:36.4802884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0448s] [ 47%] 2024-08-07T18:08:36.4804126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0119s] [ 47%] 2024-08-07T18:08:36.4805361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0116s] [ 47%] 2024-08-07T18:08:36.4806598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0158s] [ 47%] 2024-08-07T18:08:36.4807828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0163s] [ 47%] 2024-08-07T18:08:36.4809055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0098s] [ 47%] 2024-08-07T18:08:36.4810273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0099s] [ 47%] 2024-08-07T18:08:36.4811570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0213s] [ 47%] 2024-08-07T18:08:36.4812877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0216s] [ 47%] 2024-08-07T18:08:36.4814092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0110s] [ 47%] 2024-08-07T18:08:36.4815345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0106s] [ 47%] 2024-08-07T18:08:36.4816577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0296s] [ 47%] 2024-08-07T18:08:36.4817819Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0333s] [ 47%] 2024-08-07T18:08:36.4819035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0111s] [ 47%] 2024-08-07T18:08:36.4820316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0110s] [ 47%] 2024-08-07T18:08:36.4821679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0440s] [ 47%] 2024-08-07T18:08:36.4822916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0454s] [ 47%] 2024-08-07T18:08:36.4824170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0124s] [ 47%] 2024-08-07T18:08:36.4825422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0121s] [ 47%] 2024-08-07T18:08:36.4826656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0164s] [ 47%] 2024-08-07T18:08:36.4827873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0171s] [ 47%] 2024-08-07T18:08:36.4829154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0104s] [ 47%] 2024-08-07T18:08:36.4830449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 47%] 2024-08-07T18:08:36.4831721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0219s] [ 47%] 2024-08-07T18:08:36.4832990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0219s] [ 47%] 2024-08-07T18:08:36.4834268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0113s] [ 47%] 
2024-08-07T18:08:36.4835810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 47%] 2024-08-07T18:08:36.4837071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0333s] [ 47%] 2024-08-07T18:08:36.4838297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0371s] [ 47%] 2024-08-07T18:08:36.4839571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0123s] [ 47%] 2024-08-07T18:08:36.4840872Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0120s] [ 47%] 2024-08-07T18:08:36.4842096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0467s] [ 47%] 2024-08-07T18:08:36.4843344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0482s] [ 47%] 2024-08-07T18:08:36.4844571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0136s] [ 47%] 2024-08-07T18:08:36.4845839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0133s] [ 47%] 2024-08-07T18:08:36.4847051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0182s] [ 47%] 2024-08-07T18:08:36.4848336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0191s] [ 47%] 2024-08-07T18:08:36.4849600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0111s] [ 47%] 2024-08-07T18:08:36.4850821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0111s] [ 47%] 2024-08-07T18:08:36.4852054Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0236s] [ 47%] 2024-08-07T18:08:36.4853282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0237s] [ 47%] 2024-08-07T18:08:36.4854530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0124s] [ 47%] 2024-08-07T18:08:36.4855766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0122s] [ 47%] 2024-08-07T18:08:36.4857001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0276s] [ 47%] 2024-08-07T18:08:36.4858276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0300s] [ 48%] 2024-08-07T18:08:36.4859557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0106s] [ 48%] 2024-08-07T18:08:36.4860778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 48%] 2024-08-07T18:08:36.4861999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0423s] [ 48%] 2024-08-07T18:08:36.4863248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0439s] [ 48%] 2024-08-07T18:08:36.4864483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0118s] [ 48%] 2024-08-07T18:08:36.4865737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0118s] [ 48%] 2024-08-07T18:08:36.4866993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0155s] [ 48%] 2024-08-07T18:08:36.4868283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0157s] [ 48%] 
2024-08-07T18:08:36.4869494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0096s] [ 48%] 2024-08-07T18:08:36.4870729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0097s] [ 48%] 2024-08-07T18:08:36.4871947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0208s] [ 48%] 2024-08-07T18:08:36.4873174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0209s] [ 48%] 2024-08-07T18:08:36.4874409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0109s] [ 48%] 2024-08-07T18:08:36.4875635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0107s] [ 48%] 2024-08-07T18:08:36.4876919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0086s] [ 48%] 2024-08-07T18:08:36.4878211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 48%] 2024-08-07T18:08:36.4879440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 48%] 2024-08-07T18:08:36.4880654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 48%] 2024-08-07T18:08:36.4881897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0113s] [ 48%] 2024-08-07T18:08:36.4883130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0109s] [ 48%] 2024-08-07T18:08:36.4884349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 48%] 2024-08-07T18:08:36.4885648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED 
[0.0073s] [ 48%] 2024-08-07T18:08:36.4886920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0089s] [ 48%] 2024-08-07T18:08:36.4888157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 48%] 2024-08-07T18:08:36.4889361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 48%] 2024-08-07T18:08:36.4890599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 48%] 2024-08-07T18:08:36.4891833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0113s] [ 48%] 2024-08-07T18:08:36.4893055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 48%] 2024-08-07T18:08:36.4894261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 48%] 2024-08-07T18:08:36.4895821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 48%] 2024-08-07T18:08:36.4897196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0085s] [ 48%] 2024-08-07T18:08:36.4898415Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 48%] 2024-08-07T18:08:36.4899644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 48%] 2024-08-07T18:08:36.4900869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 48%] 2024-08-07T18:08:36.4902114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0113s] [ 48%] 2024-08-07T18:08:36.4903337Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 48%] 2024-08-07T18:08:36.4904635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 48%] 2024-08-07T18:08:36.4905940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 48%] 2024-08-07T18:08:36.4907145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 48%] 2024-08-07T18:08:36.4908377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 48%] 2024-08-07T18:08:36.4909589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 48%] 2024-08-07T18:08:36.4910828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 48%] 2024-08-07T18:08:36.4912043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0113s] [ 48%] 2024-08-07T18:08:36.4913282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 48%] 2024-08-07T18:08:36.4914544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 48%] 2024-08-07T18:08:36.4915844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 48%] 2024-08-07T18:08:36.4917068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 48%] 2024-08-07T18:08:36.4918303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 48%] 2024-08-07T18:08:36.4919523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 48%] 
2024-08-07T18:08:36.4920781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 48%] 2024-08-07T18:08:36.4922046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0114s] [ 48%] 2024-08-07T18:08:36.4923330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 48%] 2024-08-07T18:08:36.4924622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 48%] 2024-08-07T18:08:36.4925854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 48%] 2024-08-07T18:08:36.4927080Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0091s] [ 48%] 2024-08-07T18:08:36.4928298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 48%] 2024-08-07T18:08:36.4929536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 48%] 2024-08-07T18:08:36.4930766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 48%] 2024-08-07T18:08:36.4931964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0116s] [ 48%] 2024-08-07T18:08:36.4933244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0117s] [ 48%] 2024-08-07T18:08:36.4934512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 48%] 2024-08-07T18:08:36.4935750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 48%] 2024-08-07T18:08:36.4936964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED 
[0.0080s] [ 48%] 2024-08-07T18:08:36.4938374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 48%] 2024-08-07T18:08:36.4939609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 48%] 2024-08-07T18:08:36.4940881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 48%] 2024-08-07T18:08:36.4942157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 48%] 2024-08-07T18:08:36.4943439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0110s] [ 48%] 2024-08-07T18:08:36.4944683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 48%] 2024-08-07T18:08:36.4945930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 48%] 2024-08-07T18:08:36.4947166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0084s] [ 48%] 2024-08-07T18:08:36.4948539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 48%] 2024-08-07T18:08:36.4950226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 48%] 2024-08-07T18:08:36.4951452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 48%] 2024-08-07T18:08:36.4952759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0112s] [ 48%] 2024-08-07T18:08:36.4954082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 48%] 2024-08-07T18:08:36.4955301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 
PASSED [0.0077s] [ 49%] 2024-08-07T18:08:36.4956530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 49%] 2024-08-07T18:08:36.4957768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 49%] 2024-08-07T18:08:36.4959102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 49%] 2024-08-07T18:08:36.4960656Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 49%] 2024-08-07T18:08:36.4962320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 49%] 2024-08-07T18:08:36.4963611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0104s] [ 49%] 2024-08-07T18:08:36.4964863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 49%] 2024-08-07T18:08:36.4966072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 49%] 2024-08-07T18:08:36.4967295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 49%] 2024-08-07T18:08:36.4968545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0081s] [ 49%] 2024-08-07T18:08:36.4969743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 49%] 2024-08-07T18:08:36.4971119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 49%] 2024-08-07T18:08:36.4972951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 49%] 2024-08-07T18:08:36.4974927Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0107s] [ 49%] 2024-08-07T18:08:36.4976228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 49%] 2024-08-07T18:08:36.4977457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 49%] 2024-08-07T18:08:36.4978673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 49%] 2024-08-07T18:08:36.4979909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 49%] 2024-08-07T18:08:36.4981141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 49%] 2024-08-07T18:08:36.4982551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 49%] 2024-08-07T18:08:36.4984122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 49%] 2024-08-07T18:08:36.4985943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0109s] [ 49%] 2024-08-07T18:08:36.4987720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 49%] 2024-08-07T18:08:36.4989410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 49%] 2024-08-07T18:08:36.4990673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 49%] 2024-08-07T18:08:36.4991891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0082s] [ 49%] 2024-08-07T18:08:36.4993106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 49%] 
2024-08-07T18:08:36.4994615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 49%] 2024-08-07T18:08:36.4996487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 49%] 2024-08-07T18:08:36.4998200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 49%] 2024-08-07T18:08:36.5000051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0109s] [ 49%] 2024-08-07T18:08:36.5001631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 49%] 2024-08-07T18:08:36.5003553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 49%] 2024-08-07T18:08:36.5005533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0086s] [ 49%] 2024-08-07T18:08:36.5006783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 49%] 2024-08-07T18:08:36.5008426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 49%] 2024-08-07T18:08:36.5009796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 49%] 2024-08-07T18:08:36.5011591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0112s] [ 49%] 2024-08-07T18:08:36.5013316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 49%] 2024-08-07T18:08:36.5014571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 49%] 2024-08-07T18:08:36.5016146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 
49%] 2024-08-07T18:08:36.5017378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0090s] [ 49%] 2024-08-07T18:08:36.5018687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 49%] 2024-08-07T18:08:36.5019968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 49%] 2024-08-07T18:08:36.5021177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 49%] 2024-08-07T18:08:36.5022443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0113s] [ 49%] 2024-08-07T18:08:36.5023670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 49%] 2024-08-07T18:08:36.5024907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 49%] 2024-08-07T18:08:36.5026117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 49%] 2024-08-07T18:08:36.5027362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 49%] 2024-08-07T18:08:36.5028627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 49%] 2024-08-07T18:08:36.5029925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 49%] 2024-08-07T18:08:36.5031139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 49%] 2024-08-07T18:08:36.5032358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0100s] [ 49%] 2024-08-07T18:08:36.5033603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 49%] 
2024-08-07T18:08:36.5034815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 49%] 2024-08-07T18:08:36.5036046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 49%] 2024-08-07T18:08:36.5037314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 49%] 2024-08-07T18:08:36.5039230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 49%] 2024-08-07T18:08:36.5040433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 49%] 2024-08-07T18:08:36.5041660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 49%] 2024-08-07T18:08:36.5042875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0105s] [ 49%] 2024-08-07T18:08:36.5044094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 49%] 2024-08-07T18:08:36.5045316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 49%] 2024-08-07T18:08:36.5046522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 49%] 2024-08-07T18:08:36.5047830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 49%] 2024-08-07T18:08:36.5049109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 49%] 2024-08-07T18:08:36.5050336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 49%] 2024-08-07T18:08:36.5051558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 49%] 
2024-08-07T18:08:36.5052802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 49%] 2024-08-07T18:08:36.5054088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 49%] 2024-08-07T18:08:36.5055310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 49%] 2024-08-07T18:08:36.5056607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 49%] 2024-08-07T18:08:36.5057889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 49%] 2024-08-07T18:08:36.5059117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 49%] 2024-08-07T18:08:36.5060316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 49%] 2024-08-07T18:08:36.5061552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 50%] 2024-08-07T18:08:36.5062770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5064003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5065209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 50%] 2024-08-07T18:08:36.5066470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 50%] 2024-08-07T18:08:36.5067774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0055s] [ 50%] 2024-08-07T18:08:36.5068993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 
50%] 2024-08-07T18:08:36.5070223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 50%] 2024-08-07T18:08:36.5071445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 50%] 2024-08-07T18:08:36.5072685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 50%] 2024-08-07T18:08:36.5073905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 50%] 2024-08-07T18:08:36.5075177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 50%] 2024-08-07T18:08:36.5076447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 50%] 2024-08-07T18:08:36.5077676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 50%] 2024-08-07T18:08:36.5078904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 50%] 2024-08-07T18:08:36.5080110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 50%] 2024-08-07T18:08:36.5081341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 50%] 2024-08-07T18:08:36.5082548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 50%] 2024-08-07T18:08:36.5083774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5085030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5086316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] 
[ 50%] 2024-08-07T18:08:36.5087544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 50%] 2024-08-07T18:08:36.5088755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 50%] 2024-08-07T18:08:36.5090000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 50%] 2024-08-07T18:08:36.5091215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 50%] 2024-08-07T18:08:36.5092449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 50%] 2024-08-07T18:08:36.5093711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5095253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0061s] [ 50%] 2024-08-07T18:08:36.5096515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 50%] 2024-08-07T18:08:36.5097760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 50%] 2024-08-07T18:08:36.5098978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 50%] 2024-08-07T18:08:36.5100197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 50%] 2024-08-07T18:08:36.5101423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 50%] 2024-08-07T18:08:36.5102631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5103975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED 
[0.0069s] [ 50%] 2024-08-07T18:08:36.5105259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5106504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 50%] 2024-08-07T18:08:36.5107717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 50%] 2024-08-07T18:08:36.5108956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 50%] 2024-08-07T18:08:36.5110179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 50%] 2024-08-07T18:08:36.5111392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 50%] 2024-08-07T18:08:36.5112688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 50%] 2024-08-07T18:08:36.5113984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 50%] 2024-08-07T18:08:36.5115215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 50%] 2024-08-07T18:08:36.5116431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 50%] 2024-08-07T18:08:36.5117673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 50%] 2024-08-07T18:08:36.5118883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 50%] 2024-08-07T18:08:36.5120108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 50%] 2024-08-07T18:08:36.5121310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] 
[ 50%] 2024-08-07T18:08:36.5122606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 50%] 2024-08-07T18:08:36.5123891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 50%] 2024-08-07T18:08:36.5125095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 50%] 2024-08-07T18:08:36.5126315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 50%] 2024-08-07T18:08:36.5127545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 50%] 2024-08-07T18:08:36.5128798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 50%] 2024-08-07T18:08:36.5130007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 50%] 2024-08-07T18:08:36.5131241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 50%] 2024-08-07T18:08:36.5132541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 50%] 2024-08-07T18:08:36.5133817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 50%] 2024-08-07T18:08:36.5135041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 50%] 2024-08-07T18:08:36.5136274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 50%] 2024-08-07T18:08:36.5137532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 50%] 2024-08-07T18:08:36.5138744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] 
[ 50%] 2024-08-07T18:08:36.5139966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 50%] 2024-08-07T18:08:36.5141245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 50%] 2024-08-07T18:08:36.5142540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5143754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5144961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 50%] 2024-08-07T18:08:36.5146199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 50%] 2024-08-07T18:08:36.5147425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 50%] 2024-08-07T18:08:36.5148661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 50%] 2024-08-07T18:08:36.5149873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 50%] 2024-08-07T18:08:36.5151156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 50%] 2024-08-07T18:08:36.5152433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 50%] 2024-08-07T18:08:36.5153679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 50%] 2024-08-07T18:08:36.5154903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 50%] 2024-08-07T18:08:36.5156132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED 
[0.0065s] [ 50%] 2024-08-07T18:08:36.5157360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 50%] 2024-08-07T18:08:36.5158571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 51%] 2024-08-07T18:08:36.5159836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 51%] 2024-08-07T18:08:36.5161096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 51%] 2024-08-07T18:08:36.5162336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 51%] 2024-08-07T18:08:36.5163555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 51%] 2024-08-07T18:08:36.5164787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 51%] 2024-08-07T18:08:36.5166013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 51%] 2024-08-07T18:08:36.5167236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0075s] [ 51%] 2024-08-07T18:08:36.5168478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 51%] 2024-08-07T18:08:36.5169739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 51%] 2024-08-07T18:08:36.5171041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 51%] 2024-08-07T18:08:36.5172259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 51%] 2024-08-07T18:08:36.5173509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED 
[0.0084s] [ 51%] 2024-08-07T18:08:36.5174738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 51%] 2024-08-07T18:08:36.5175983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 51%] 2024-08-07T18:08:36.5177191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 51%] 2024-08-07T18:08:36.5178469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 51%] 2024-08-07T18:08:36.5179751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 51%] 2024-08-07T18:08:36.5180963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 51%] 2024-08-07T18:08:36.5182200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 51%] 2024-08-07T18:08:36.5183426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 51%] 2024-08-07T18:08:36.5184665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 51%] 2024-08-07T18:08:36.5185881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 51%] 2024-08-07T18:08:36.5187112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 51%] 2024-08-07T18:08:36.5188388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 51%] 2024-08-07T18:08:36.5189653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 51%] 2024-08-07T18:08:36.5190880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED 
[0.0057s] [ 51%] 2024-08-07T18:08:36.5192099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 51%] 2024-08-07T18:08:36.5193341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 51%] 2024-08-07T18:08:36.5194559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 51%] 2024-08-07T18:08:36.5196043Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 51%] 2024-08-07T18:08:36.5197363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 51%] 2024-08-07T18:08:36.5198682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 51%] 2024-08-07T18:08:36.5199887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 51%] 2024-08-07T18:08:36.5201259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 51%] 2024-08-07T18:08:36.5202503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 51%] 2024-08-07T18:08:36.5203722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 51%] 2024-08-07T18:08:36.5204968Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 51%] 2024-08-07T18:08:36.5206229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 51%] 2024-08-07T18:08:36.5207610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 51%] 2024-08-07T18:08:36.5209088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 
51%] 2024-08-07T18:08:36.5210416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 51%] 2024-08-07T18:08:36.5211619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 51%] 2024-08-07T18:08:36.5212830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 51%] 2024-08-07T18:08:36.5214067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 51%] 2024-08-07T18:08:36.5215273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 51%] 2024-08-07T18:08:36.5216558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 51%] 2024-08-07T18:08:36.5217811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 51%] 2024-08-07T18:08:36.5219050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 51%] 2024-08-07T18:08:36.5220240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 51%] 2024-08-07T18:08:36.5221467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 51%] 2024-08-07T18:08:36.5222716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 51%] 2024-08-07T18:08:36.5223921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 51%] 2024-08-07T18:08:36.5225139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 51%] 2024-08-07T18:08:36.5226391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 51%] 
2024-08-07T18:08:36.5227667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 51%] 2024-08-07T18:08:36.5228903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 51%] 2024-08-07T18:08:36.5230125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 51%] 2024-08-07T18:08:36.5231336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 51%] 2024-08-07T18:08:36.5232571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 51%] 2024-08-07T18:08:36.5233781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 51%] 2024-08-07T18:08:36.5234980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 51%] 2024-08-07T18:08:36.5236251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 51%] 2024-08-07T18:08:36.5237508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 51%] 2024-08-07T18:08:36.5238750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 51%] 2024-08-07T18:08:36.5239951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 51%] 2024-08-07T18:08:36.5241178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 51%] 2024-08-07T18:08:36.5242376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 51%] 2024-08-07T18:08:36.5243596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 51%] 
2024-08-07T18:08:36.5244859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 51%] 2024-08-07T18:08:36.5246126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 51%] 2024-08-07T18:08:36.5247342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 51%] 2024-08-07T18:08:36.5248564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 51%] 2024-08-07T18:08:36.5249786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 51%] 2024-08-07T18:08:36.5251003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 51%] 2024-08-07T18:08:36.5252222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 51%] 2024-08-07T18:08:36.5253428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 51%] 2024-08-07T18:08:36.5254756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 52%] 2024-08-07T18:08:36.5256025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 52%] 2024-08-07T18:08:36.5257221Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 52%] 2024-08-07T18:08:36.5258442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 52%] 2024-08-07T18:08:36.5259646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 52%] 2024-08-07T18:08:36.5260873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 52%] 
2024-08-07T18:08:36.5262069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 52%]
2024-08-07T18:08:36.5263346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 52%]
2024-08-07T18:08:36.5264590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 52%]
2024-08-07T18:08:36.5265817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 52%]
2024-08-07T18:08:36.5267018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 52%]
2024-08-07T18:08:36.5268230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 52%]
2024-08-07T18:08:36.5269459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 52%]
2024-08-07T18:08:36.5270663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 52%]
2024-08-07T18:08:36.5271887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 52%]
2024-08-07T18:08:36.5273141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 52%]
2024-08-07T18:08:36.5274421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 52%]
2024-08-07T18:08:36.5275629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 52%]
2024-08-07T18:08:36.5276860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 52%]
2024-08-07T18:08:36.5278072Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 52%]
2024-08-07T18:08:36.5279302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 52%]
2024-08-07T18:08:36.5280520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 52%]
2024-08-07T18:08:36.5281724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 52%]
2024-08-07T18:08:36.5283008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 52%]
2024-08-07T18:08:36.5284272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 52%]
2024-08-07T18:08:36.5285483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 52%]
2024-08-07T18:08:36.5286702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 52%]
2024-08-07T18:08:36.5287934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 52%]
2024-08-07T18:08:36.5289185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 52%]
2024-08-07T18:08:36.5290404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 52%]
2024-08-07T18:08:36.5291693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 52%]
2024-08-07T18:08:36.5292979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 52%]
2024-08-07T18:08:36.5294217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 52%]
2024-08-07T18:08:36.5295723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 52%]
2024-08-07T18:08:36.5296986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 52%]
2024-08-07T18:08:36.5298240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 52%]
2024-08-07T18:08:36.5299463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 52%]
2024-08-07T18:08:36.5300696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 52%]
2024-08-07T18:08:36.5302020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 52%]
2024-08-07T18:08:36.5303344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 52%]
2024-08-07T18:08:36.5304550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 52%]
2024-08-07T18:08:36.5306045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 52%]
2024-08-07T18:08:36.5308096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0095s] [ 52%]
2024-08-07T18:08:36.5309889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 52%]
2024-08-07T18:08:36.5311727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 52%]
2024-08-07T18:08:36.5313324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 52%]
2024-08-07T18:08:36.5315262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0105s] [ 52%]
2024-08-07T18:08:36.5316820Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 52%]
2024-08-07T18:08:36.5318536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 52%]
2024-08-07T18:08:36.5320108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 52%]
2024-08-07T18:08:36.5322013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 52%]
2024-08-07T18:08:36.5324037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 52%]
2024-08-07T18:08:36.5325492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 52%]
2024-08-07T18:08:36.5327233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 52%]
2024-08-07T18:08:36.5329158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 52%]
2024-08-07T18:08:36.5330848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 52%]
2024-08-07T18:08:36.5332593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 52%]
2024-08-07T18:08:36.5334345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 52%]
2024-08-07T18:08:36.5335918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0109s] [ 52%]
2024-08-07T18:08:36.5337964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 52%]
2024-08-07T18:08:36.5340159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0053s] [ 52%]
2024-08-07T18:08:36.5341991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 52%]
2024-08-07T18:08:36.5343880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0122s] [ 52%]
2024-08-07T18:08:36.5345847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 52%]
2024-08-07T18:08:36.5347235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 52%]
2024-08-07T18:08:36.5348484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 52%]
2024-08-07T18:08:36.5349707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 52%]
2024-08-07T18:08:36.5350919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 52%]
2024-08-07T18:08:36.5352233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 52%]
2024-08-07T18:08:36.5353559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 52%]
2024-08-07T18:08:36.5354836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 52%]
2024-08-07T18:08:36.5356053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 52%]
2024-08-07T18:08:36.5357287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 52%]
2024-08-07T18:08:36.5358513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 52%]
2024-08-07T18:08:36.5359722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0085s] [ 52%]
2024-08-07T18:08:36.5360999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 52%]
2024-08-07T18:08:36.5362297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 52%]
2024-08-07T18:08:36.5363518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 52%]
2024-08-07T18:08:36.5364724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0095s] [ 53%]
2024-08-07T18:08:36.5366001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 53%]
2024-08-07T18:08:36.5367222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 53%]
2024-08-07T18:08:36.5368461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 53%]
2024-08-07T18:08:36.5369661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 53%]
2024-08-07T18:08:36.5370921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 53%]
2024-08-07T18:08:36.5372219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 53%]
2024-08-07T18:08:36.5373433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 53%]
2024-08-07T18:08:36.5374654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 53%]
2024-08-07T18:08:36.5375871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 53%]
2024-08-07T18:08:36.5377101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 53%]
2024-08-07T18:08:36.5378314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 53%]
2024-08-07T18:08:36.5379581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 53%]
2024-08-07T18:08:36.5380855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 53%]
2024-08-07T18:08:36.5382085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 53%]
2024-08-07T18:08:36.5383319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 53%]
2024-08-07T18:08:36.5384534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 53%]
2024-08-07T18:08:36.5385783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 53%]
2024-08-07T18:08:36.5386994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0061s] [ 53%]
2024-08-07T18:08:36.5388229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 53%]
2024-08-07T18:08:36.5389476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 53%]
2024-08-07T18:08:36.5390764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 53%]
2024-08-07T18:08:36.5391972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 53%]
2024-08-07T18:08:36.5393182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 53%]
2024-08-07T18:08:36.5394407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 53%]
2024-08-07T18:08:36.5395963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 53%]
2024-08-07T18:08:36.5397204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 53%]
2024-08-07T18:08:36.5398410Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 53%]
2024-08-07T18:08:36.5399740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0055s] [ 53%]
2024-08-07T18:08:36.5401040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 53%]
2024-08-07T18:08:36.5402250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 53%]
2024-08-07T18:08:36.5403487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0053s] [ 53%]
2024-08-07T18:08:36.5404708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0063s] [ 53%]
2024-08-07T18:08:36.5405944Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 53%]
2024-08-07T18:08:36.5407151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 53%]
2024-08-07T18:08:36.5408456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 53%]
2024-08-07T18:08:36.5409717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 53%]
2024-08-07T18:08:36.5410939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 53%]
2024-08-07T18:08:36.5412147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 53%]
2024-08-07T18:08:36.5413360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 53%]
2024-08-07T18:08:36.5414588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 53%]
2024-08-07T18:08:36.5415795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 53%]
2024-08-07T18:08:36.5417016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 53%]
2024-08-07T18:08:36.5418270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 53%]
2024-08-07T18:08:36.5419545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 53%]
2024-08-07T18:08:36.5420753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 53%]
2024-08-07T18:08:36.5422019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 53%]
2024-08-07T18:08:36.5423252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 53%]
2024-08-07T18:08:36.5424460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 53%]
2024-08-07T18:08:36.5425692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 53%]
2024-08-07T18:08:36.5426945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 53%]
2024-08-07T18:08:36.5428237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 53%]
2024-08-07T18:08:36.5429436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 53%]
2024-08-07T18:08:36.5430653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 53%]
2024-08-07T18:08:36.5431856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 53%]
2024-08-07T18:08:36.5433096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 53%]
2024-08-07T18:08:36.5434295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 53%]
2024-08-07T18:08:36.5435512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 53%]
2024-08-07T18:08:36.5436768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 53%]
2024-08-07T18:08:36.5438073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 53%]
2024-08-07T18:08:36.5439291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 53%]
2024-08-07T18:08:36.5440496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 53%]
2024-08-07T18:08:36.5441719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0053s] [ 53%]
2024-08-07T18:08:36.5442948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 53%]
2024-08-07T18:08:36.5444166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 53%]
2024-08-07T18:08:36.5445423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 53%]
2024-08-07T18:08:36.5446687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 53%]
2024-08-07T18:08:36.5447914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 53%]
2024-08-07T18:08:36.5449107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 53%]
2024-08-07T18:08:36.5450327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 53%]
2024-08-07T18:08:36.5451526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 53%]
2024-08-07T18:08:36.5452761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 53%]
2024-08-07T18:08:36.5453956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 53%]
2024-08-07T18:08:36.5455219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 53%]
2024-08-07T18:08:36.5456470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 53%]
2024-08-07T18:08:36.5457674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 53%]
2024-08-07T18:08:36.5458897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 53%]
2024-08-07T18:08:36.5460106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 54%]
2024-08-07T18:08:36.5461331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 54%]
2024-08-07T18:08:36.5462558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0053s] [ 54%]
2024-08-07T18:08:36.5463786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0063s] [ 54%]
2024-08-07T18:08:36.5465036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 54%]
2024-08-07T18:08:36.5466324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 54%]
2024-08-07T18:08:36.5467537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 54%]
2024-08-07T18:08:36.5468734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 54%]
2024-08-07T18:08:36.5469959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 54%]
2024-08-07T18:08:36.5471155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 54%]
2024-08-07T18:08:36.5472401Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 54%]
2024-08-07T18:08:36.5473645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 54%]
2024-08-07T18:08:36.5474927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 54%]
2024-08-07T18:08:36.5476126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 54%]
2024-08-07T18:08:36.5477341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 54%]
2024-08-07T18:08:36.5478547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 54%]
2024-08-07T18:08:36.5479766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 54%]
2024-08-07T18:08:36.5480986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 54%]
2024-08-07T18:08:36.5482216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 54%]
2024-08-07T18:08:36.5483488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 54%]
2024-08-07T18:08:36.5484758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 54%]
2024-08-07T18:08:36.5485986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 54%]
2024-08-07T18:08:36.5487200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 54%]
2024-08-07T18:08:36.5488418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 54%]
2024-08-07T18:08:36.5489637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 54%]
2024-08-07T18:08:36.5490825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 54%]
2024-08-07T18:08:36.5492106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 54%]
2024-08-07T18:08:36.5493358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 54%]
2024-08-07T18:08:36.5494586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 54%]
2024-08-07T18:08:36.5496073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 54%]
2024-08-07T18:08:36.5497353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 54%]
2024-08-07T18:08:36.5498562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 54%]
2024-08-07T18:08:36.5499797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 54%]
2024-08-07T18:08:36.5501004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0053s] [ 54%]
2024-08-07T18:08:36.5502306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 54%]
2024-08-07T18:08:36.5503613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 54%]
2024-08-07T18:08:36.5504830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 54%]
2024-08-07T18:08:36.5506061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 54%]
2024-08-07T18:08:36.5507285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 54%]
2024-08-07T18:08:36.5508531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 54%]
2024-08-07T18:08:36.5509714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 54%]
2024-08-07T18:08:36.5510995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 54%]
2024-08-07T18:08:36.5512278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 54%]
2024-08-07T18:08:36.5513485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 54%]
2024-08-07T18:08:36.5514714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 54%]
2024-08-07T18:08:36.5515926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 54%]
2024-08-07T18:08:36.5517166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 54%]
2024-08-07T18:08:36.5518367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 54%]
2024-08-07T18:08:36.5519603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0055s] [ 54%]
2024-08-07T18:08:36.5520882Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 54%]
2024-08-07T18:08:36.5522205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 54%]
2024-08-07T18:08:36.5523426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 54%]
2024-08-07T18:08:36.5524642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 54%]
2024-08-07T18:08:36.5525895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 54%]
2024-08-07T18:08:36.5527119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 54%]
2024-08-07T18:08:36.5528339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 54%]
2024-08-07T18:08:36.5529536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 54%]
2024-08-07T18:08:36.5530796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 54%]
2024-08-07T18:08:36.5532050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 54%]
2024-08-07T18:08:36.5533283Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 54%]
2024-08-07T18:08:36.5534492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 54%]
2024-08-07T18:08:36.5535696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 54%]
2024-08-07T18:08:36.5536924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 54%]
2024-08-07T18:08:36.5538154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0166s] [ 54%]
2024-08-07T18:08:36.5539467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0175s] [ 54%]
2024-08-07T18:08:36.5540747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0087s] [ 54%]
2024-08-07T18:08:36.5541997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 54%]
2024-08-07T18:08:36.5543242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0241s] [ 54%]
2024-08-07T18:08:36.5544506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0243s] [ 54%]
2024-08-07T18:08:36.5545744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 54%]
2024-08-07T18:08:36.5546982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0095s] [ 54%]
2024-08-07T18:08:36.5548209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0165s] [ 54%]
2024-08-07T18:08:36.5549481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0173s] [ 54%]
2024-08-07T18:08:36.5550767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0089s] [ 54%]
2024-08-07T18:08:36.5551991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 54%]
2024-08-07T18:08:36.5553253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0236s] [ 54%]
2024-08-07T18:08:36.5554486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0237s] [ 54%]
2024-08-07T18:08:36.5555730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0096s] [ 55%]
2024-08-07T18:08:36.5556952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 55%]
2024-08-07T18:08:36.5558220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0169s] [ 55%]
2024-08-07T18:08:36.5559522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0184s] [ 55%]
2024-08-07T18:08:36.5560744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0089s] [ 55%]
2024-08-07T18:08:36.5561991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 55%]
2024-08-07T18:08:36.5563246Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0244s] [ 55%]
2024-08-07T18:08:36.5564519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0247s] [ 55%]
2024-08-07T18:08:36.5565738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0096s] [ 55%]
2024-08-07T18:08:36.5566988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0095s] [ 55%]
2024-08-07T18:08:36.5568255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0168s] [ 55%]
2024-08-07T18:08:36.5569531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0179s] [ 55%]
2024-08-07T18:08:36.5570762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0089s] [ 55%]
2024-08-07T18:08:36.5571992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 55%]
2024-08-07T18:08:36.5573268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0239s] [ 55%]
2024-08-07T18:08:36.5574496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0241s] [ 55%]
2024-08-07T18:08:36.5575746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0097s] [ 55%]
2024-08-07T18:08:36.5577016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0099s] [ 55%]
2024-08-07T18:08:36.5578313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0189s] [ 55%]
2024-08-07T18:08:36.5579549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0206s] [ 55%]
2024-08-07T18:08:36.5580766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0096s] [ 55%]
2024-08-07T18:08:36.5582016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0096s] [ 55%]
2024-08-07T18:08:36.5583270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0264s] [ 55%]
2024-08-07T18:08:36.5584522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0267s] [ 55%]
2024-08-07T18:08:36.5585752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0104s] [ 55%]
2024-08-07T18:08:36.5587049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0104s] [ 55%]
2024-08-07T18:08:36.5588397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0187s] [ 55%]
2024-08-07T18:08:36.5589641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0202s] [ 55%]
2024-08-07T18:08:36.5590862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0097s] [ 55%]
2024-08-07T18:08:36.5592092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 55%]
2024-08-07T18:08:36.5593352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0256s] [ 55%]
2024-08-07T18:08:36.5594580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0260s] [ 55%]
2024-08-07T18:08:36.5596209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0108s] [ 55%]
2024-08-07T18:08:36.5597543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 55%]
2024-08-07T18:08:36.5598779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0159s] [ 55%]
2024-08-07T18:08:36.5600002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0164s] [ 55%]
2024-08-07T18:08:36.5601250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 55%]
2024-08-07T18:08:36.5602487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 55%]
2024-08-07T18:08:36.5603742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0235s] [ 55%]
2024-08-07T18:08:36.5604982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0237s] [ 55%]
2024-08-07T18:08:36.5606273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 55%]
2024-08-07T18:08:36.5607611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 55%]
2024-08-07T18:08:36.5608830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0160s] [ 55%]
2024-08-07T18:08:36.5610075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0164s] [ 55%]
2024-08-07T18:08:36.5611300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0086s] [ 55%]
2024-08-07T18:08:36.5612544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 55%]
2024-08-07T18:08:36.5613830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0230s] [ 55%]
2024-08-07T18:08:36.5615104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0233s] [ 55%]
2024-08-07T18:08:36.5616395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 55%]
2024-08-07T18:08:36.5617621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 55%]
2024-08-07T18:08:36.5618864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0268s] [ 55%]
2024-08-07T18:08:36.5620103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0294s] [ 55%]
2024-08-07T18:08:36.5621352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0106s] [ 55%]
2024-08-07T18:08:36.5622630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 55%]
2024-08-07T18:08:36.5623895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0414s] [ 55%]
2024-08-07T18:08:36.5625178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0425s] [ 55%]
2024-08-07T18:08:36.5626485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0117s] [ 55%]
2024-08-07T18:08:36.5627718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0116s] [ 55%]
2024-08-07T18:08:36.5628942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0239s] [ 55%]
2024-08-07T18:08:36.5630193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0258s] [ 55%]
2024-08-07T18:08:36.5631414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0103s] [ 55%]
2024-08-07T18:08:36.5632653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 55%]
2024-08-07T18:08:36.5633939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0362s] [ 55%]
2024-08-07T18:08:36.5635235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0370s] [ 55%]
2024-08-07T18:08:36.5636456Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0116s] [ 55%]
2024-08-07T18:08:36.5637697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0118s] [ 55%]
2024-08-07T18:08:36.5638926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0277s] [ 55%] 2024-08-07T18:08:36.5640165Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0312s] [ 55%] 2024-08-07T18:08:36.5641405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0110s] [ 55%] 2024-08-07T18:08:36.5642632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0110s] [ 55%] 2024-08-07T18:08:36.5643945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0421s] [ 55%] 2024-08-07T18:08:36.5645231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0430s] [ 55%] 2024-08-07T18:08:36.5646468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0124s] [ 55%] 2024-08-07T18:08:36.5647709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0123s] [ 55%] 2024-08-07T18:08:36.5648948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0250s] [ 55%] 2024-08-07T18:08:36.5650170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0274s] [ 55%] 2024-08-07T18:08:36.5651381Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0107s] [ 55%] 2024-08-07T18:08:36.5652670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 55%] 2024-08-07T18:08:36.5654009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0370s] [ 56%] 2024-08-07T18:08:36.5655256Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0378s] [ 56%] 2024-08-07T18:08:36.5656474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0124s] [ 56%] 2024-08-07T18:08:36.5657724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0120s] [ 56%] 2024-08-07T18:08:36.5658957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0312s] [ 56%] 2024-08-07T18:08:36.5660206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0352s] [ 56%] 2024-08-07T18:08:36.5661432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0122s] [ 56%] 2024-08-07T18:08:36.5662714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0121s] [ 56%] 2024-08-07T18:08:36.5664024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0454s] [ 56%] 2024-08-07T18:08:36.5665257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0460s] [ 56%] 2024-08-07T18:08:36.5666506Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0133s] [ 56%] 2024-08-07T18:08:36.5667752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0134s] [ 56%] 2024-08-07T18:08:36.5668988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0279s] [ 56%] 2024-08-07T18:08:36.5670210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0309s] [ 56%] 2024-08-07T18:08:36.5671489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0122s] [ 
56%] 2024-08-07T18:08:36.5672761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0122s] [ 56%] 2024-08-07T18:08:36.5673996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0401s] [ 56%] 2024-08-07T18:08:36.5675236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0407s] [ 56%] 2024-08-07T18:08:36.5676473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0131s] [ 56%] 2024-08-07T18:08:36.5677719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0132s] [ 56%] 2024-08-07T18:08:36.5678938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0257s] [ 56%] 2024-08-07T18:08:36.5680242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0276s] [ 56%] 2024-08-07T18:08:36.5681516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0104s] [ 56%] 2024-08-07T18:08:36.5682762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0101s] [ 56%] 2024-08-07T18:08:36.5683998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0407s] [ 56%] 2024-08-07T18:08:36.5685236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0419s] [ 56%] 2024-08-07T18:08:36.5686491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0116s] [ 56%] 2024-08-07T18:08:36.5687720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 56%] 2024-08-07T18:08:36.5688950Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0232s] [ 56%] 2024-08-07T18:08:36.5690224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0246s] [ 56%] 2024-08-07T18:08:36.5691507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0102s] [ 56%] 2024-08-07T18:08:36.5692725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 56%] 2024-08-07T18:08:36.5693987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0356s] [ 56%] 2024-08-07T18:08:36.5695446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0362s] [ 56%] 2024-08-07T18:08:36.5696851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0115s] [ 56%] 2024-08-07T18:08:36.5698090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 56%] 2024-08-07T18:08:36.5699396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0108s] [ 56%] 2024-08-07T18:08:36.5700746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 56%] 2024-08-07T18:08:36.5701961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0076s] [ 56%] 2024-08-07T18:08:36.5703192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 56%] 2024-08-07T18:08:36.5704432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0148s] [ 56%] 2024-08-07T18:08:36.5705684Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0145s] [ 56%] 
2024-08-07T18:08:36.5706902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0083s] [ 56%] 2024-08-07T18:08:36.5708125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 56%] 2024-08-07T18:08:36.5709422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0111s] [ 56%] 2024-08-07T18:08:36.5710709Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 56%] 2024-08-07T18:08:36.5711933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 56%] 2024-08-07T18:08:36.5713148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 56%] 2024-08-07T18:08:36.5714397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0147s] [ 56%] 2024-08-07T18:08:36.5715620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0149s] [ 56%] 2024-08-07T18:08:36.5716854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 56%] 2024-08-07T18:08:36.5718073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 56%] 2024-08-07T18:08:36.5719334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0112s] [ 56%] 2024-08-07T18:08:36.5720620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 56%] 2024-08-07T18:08:36.5721869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 56%] 2024-08-07T18:08:36.5723120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 56%] 
2024-08-07T18:08:36.5724369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0151s] [ 56%] 2024-08-07T18:08:36.5725613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0150s] [ 56%] 2024-08-07T18:08:36.5726826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 56%] 2024-08-07T18:08:36.5728118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 56%] 2024-08-07T18:08:36.5729378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 56%] 2024-08-07T18:08:36.5730594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 56%] 2024-08-07T18:08:36.5731818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 56%] 2024-08-07T18:08:36.5733036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 56%] 2024-08-07T18:08:36.5734297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0154s] [ 56%] 2024-08-07T18:08:36.5735514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0153s] [ 56%] 2024-08-07T18:08:36.5736737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 56%] 2024-08-07T18:08:36.5738021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 56%] 2024-08-07T18:08:36.5739310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0126s] [ 56%] 2024-08-07T18:08:36.5740529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0128s] [ 
56%] 2024-08-07T18:08:36.5741745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 56%] 2024-08-07T18:08:36.5743010Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 56%] 2024-08-07T18:08:36.5744238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0165s] [ 56%] 2024-08-07T18:08:36.5745483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0165s] [ 56%] 2024-08-07T18:08:36.5746754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 56%] 2024-08-07T18:08:36.5748073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 56%] 2024-08-07T18:08:36.5749272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0131s] [ 56%] 2024-08-07T18:08:36.5750500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0130s] [ 57%] 2024-08-07T18:08:36.5751711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0085s] [ 57%] 2024-08-07T18:08:36.5752933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 57%] 2024-08-07T18:08:36.5754186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0169s] [ 57%] 2024-08-07T18:08:36.5755407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0167s] [ 57%] 2024-08-07T18:08:36.5756687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 57%] 2024-08-07T18:08:36.5757957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] 
[ 57%] 2024-08-07T18:08:36.5759179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0103s] [ 57%] 2024-08-07T18:08:36.5760398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 57%] 2024-08-07T18:08:36.5761642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0074s] [ 57%] 2024-08-07T18:08:36.5762860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 57%] 2024-08-07T18:08:36.5764094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0143s] [ 57%] 2024-08-07T18:08:36.5765379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0142s] [ 57%] 2024-08-07T18:08:36.5766652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0083s] [ 57%] 2024-08-07T18:08:36.5767892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 57%] 2024-08-07T18:08:36.5769095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0107s] [ 57%] 2024-08-07T18:08:36.5770323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 57%] 2024-08-07T18:08:36.5771533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 57%] 2024-08-07T18:08:36.5772763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 57%] 2024-08-07T18:08:36.5773989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0144s] [ 57%] 2024-08-07T18:08:36.5775247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0146s] [ 57%] 
2024-08-07T18:08:36.5776530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 57%] 2024-08-07T18:08:36.5777743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 57%] 2024-08-07T18:08:36.5778985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0487s] [ 57%] 2024-08-07T18:08:36.5780217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0546s] [ 57%] 2024-08-07T18:08:36.5781465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0143s] [ 57%] 2024-08-07T18:08:36.5782693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0142s] [ 57%] 2024-08-07T18:08:36.5784008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0771s] [ 57%] 2024-08-07T18:08:36.5785292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0810s] [ 57%] 2024-08-07T18:08:36.5786518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0168s] [ 57%] 2024-08-07T18:08:36.5787770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0167s] [ 57%] 2024-08-07T18:08:36.5788991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0336s] [ 57%] 2024-08-07T18:08:36.5790238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0365s] [ 57%] 2024-08-07T18:08:36.5791448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0136s] [ 57%] 2024-08-07T18:08:36.5792686Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 
PASSED [0.0132s] [ 57%] 2024-08-07T18:08:36.5793976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0508s] [ 57%] 2024-08-07T18:08:36.5795531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0524s] [ 57%] 2024-08-07T18:08:36.5796774Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0157s] [ 57%] 2024-08-07T18:08:36.5798006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0156s] [ 57%] 2024-08-07T18:08:36.5799260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0509s] [ 57%] 2024-08-07T18:08:36.5800488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0577s] [ 57%] 2024-08-07T18:08:36.5801724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0154s] [ 57%] 2024-08-07T18:08:36.5803027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0151s] [ 57%] 2024-08-07T18:08:36.5804367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0786s] [ 57%] 2024-08-07T18:08:36.5805597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0813s] [ 57%] 2024-08-07T18:08:36.5806840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0179s] [ 57%] 2024-08-07T18:08:36.5808083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0179s] [ 57%] 2024-08-07T18:08:36.5809304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0353s] [ 57%] 2024-08-07T18:08:36.5810544Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0387s] [ 57%] 2024-08-07T18:08:36.5811760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0142s] [ 57%] 2024-08-07T18:08:36.5813082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0143s] [ 57%] 2024-08-07T18:08:36.5814397Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0524s] [ 57%] 2024-08-07T18:08:36.5815714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0534s] [ 57%] 2024-08-07T18:08:36.5816942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0165s] [ 57%] 2024-08-07T18:08:36.5818195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0165s] [ 57%] 2024-08-07T18:08:36.5819416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0577s] [ 57%] 2024-08-07T18:08:36.5820641Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0651s] [ 57%] 2024-08-07T18:08:36.5821993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0173s] [ 57%] 2024-08-07T18:08:36.5823291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0173s] [ 57%] 2024-08-07T18:08:36.5824567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0837s] [ 57%] 2024-08-07T18:08:36.5825797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0860s] [ 57%] 2024-08-07T18:08:36.5827047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0196s] [ 57%] 
2024-08-07T18:08:36.5828285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0195s] [ 57%] 2024-08-07T18:08:36.5829519Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0399s] [ 57%] 2024-08-07T18:08:36.5830740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0434s] [ 57%] 2024-08-07T18:08:36.5832023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0161s] [ 57%] 2024-08-07T18:08:36.5833302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0161s] [ 57%] 2024-08-07T18:08:36.5834556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0562s] [ 57%] 2024-08-07T18:08:36.5835805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0569s] [ 57%] 2024-08-07T18:08:36.5837036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0181s] [ 57%] 2024-08-07T18:08:36.5838281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0182s] [ 57%] 2024-08-07T18:08:36.5839502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0472s] [ 57%] 2024-08-07T18:08:36.5840790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0517s] [ 57%] 2024-08-07T18:08:36.5842063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0143s] [ 57%] 2024-08-07T18:08:36.5843309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0140s] [ 57%] 2024-08-07T18:08:36.5844559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 
PASSED [0.0761s] [ 57%] 2024-08-07T18:08:36.5845797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0793s] [ 57%] 2024-08-07T18:08:36.5847052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0165s] [ 57%] 2024-08-07T18:08:36.5848282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0163s] [ 58%] 2024-08-07T18:08:36.5849511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0320s] [ 58%] 2024-08-07T18:08:36.5850781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0344s] [ 58%] 2024-08-07T18:08:36.5852069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0134s] [ 58%] 2024-08-07T18:08:36.5853290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0131s] [ 58%] 2024-08-07T18:08:36.5854555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0500s] [ 58%] 2024-08-07T18:08:36.5855794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0513s] [ 58%] 2024-08-07T18:08:36.5857011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0153s] [ 58%] 2024-08-07T18:08:36.5858250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0153s] [ 58%] 2024-08-07T18:08:36.5859513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0117s] [ 58%] 2024-08-07T18:08:36.5860806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0119s] [ 58%] 2024-08-07T18:08:36.5862022Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 58%] 2024-08-07T18:08:36.5863260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 58%] 2024-08-07T18:08:36.5864508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0160s] [ 58%] 2024-08-07T18:08:36.5865761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0154s] [ 58%] 2024-08-07T18:08:36.5866979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 58%] 2024-08-07T18:08:36.5868201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 58%] 2024-08-07T18:08:36.5869482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0119s] [ 58%] 2024-08-07T18:08:36.5870754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0122s] [ 58%] 2024-08-07T18:08:36.5871984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 58%] 2024-08-07T18:08:36.5873206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 58%] 2024-08-07T18:08:36.5874469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0163s] [ 58%] 2024-08-07T18:08:36.5875701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0158s] [ 58%] 2024-08-07T18:08:36.5876927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 58%] 2024-08-07T18:08:36.5878187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 58%] 
2024-08-07T18:08:36.5879459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0119s] [ 58%] 2024-08-07T18:08:36.5880696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 58%] 2024-08-07T18:08:36.5881909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 58%] 2024-08-07T18:08:36.5883153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 58%] 2024-08-07T18:08:36.5884402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0163s] [ 58%] 2024-08-07T18:08:36.5885649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0163s] [ 58%] 2024-08-07T18:08:36.5886867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 58%] 2024-08-07T18:08:36.5888157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 58%] 2024-08-07T18:08:36.5889438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0123s] [ 58%] 2024-08-07T18:08:36.5890655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0127s] [ 58%] 2024-08-07T18:08:36.5891883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 58%] 2024-08-07T18:08:36.5893107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 58%] 2024-08-07T18:08:36.5894361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0165s] [ 58%] 2024-08-07T18:08:36.5895823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED 
[0.0166s] [ 58%] 2024-08-07T18:08:36.5897156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 58%] 2024-08-07T18:08:36.5898454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 58%] 2024-08-07T18:08:36.5899692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0132s] [ 58%] 2024-08-07T18:08:36.5900913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0137s] [ 58%] 2024-08-07T18:08:36.5902134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0085s] [ 58%] 2024-08-07T18:08:36.5903385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 58%] 2024-08-07T18:08:36.5904632Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0172s] [ 58%] 2024-08-07T18:08:36.5905885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0173s] [ 58%] 2024-08-07T18:08:36.5907172Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 58%] 2024-08-07T18:08:36.5908488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 58%] 2024-08-07T18:08:36.5909701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0134s] [ 58%] 2024-08-07T18:08:36.5910943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0139s] [ 58%] 2024-08-07T18:08:36.5912176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0088s] [ 58%] 2024-08-07T18:08:36.5913393Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 58%] 2024-08-07T18:08:36.5914659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0175s] [ 58%] 2024-08-07T18:08:36.5915923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0175s] [ 58%] 2024-08-07T18:08:36.5917210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 58%] 2024-08-07T18:08:36.5918437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 58%] 2024-08-07T18:08:36.5919669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0114s] [ 58%] 2024-08-07T18:08:36.5920894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 58%] 2024-08-07T18:08:36.5922185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 58%] 2024-08-07T18:08:36.5923412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 58%] 2024-08-07T18:08:36.5924659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0157s] [ 58%] 2024-08-07T18:08:36.5925954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0158s] [ 58%] 2024-08-07T18:08:36.5927238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 58%] 2024-08-07T18:08:36.5928482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 58%] 2024-08-07T18:08:36.5929701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0118s] [ 58%] 
2024-08-07T18:08:36.5930951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0117s] [ 58%] 2024-08-07T18:08:36.5932160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 58%] 2024-08-07T18:08:36.5933391Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 58%] 2024-08-07T18:08:36.5934671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0160s] [ 58%] 2024-08-07T18:08:36.5935947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0161s] [ 58%] 2024-08-07T18:08:36.5937178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0088s] [ 58%] 2024-08-07T18:08:36.5938396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 58%] 2024-08-07T18:08:36.5939637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0109s] [ 58%] 2024-08-07T18:08:36.5940870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0109s] [ 58%] 2024-08-07T18:08:36.5942105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0076s] [ 58%] 2024-08-07T18:08:36.5943325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 58%] 2024-08-07T18:08:36.5944622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0144s] [ 59%] 2024-08-07T18:08:36.5945902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0144s] [ 59%] 2024-08-07T18:08:36.5947120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 
59%] 2024-08-07T18:08:36.5948369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 59%] 2024-08-07T18:08:36.5949592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0111s] [ 59%] 2024-08-07T18:08:36.5950827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 59%] 2024-08-07T18:08:36.5952035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 59%] 2024-08-07T18:08:36.5953366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 59%] 2024-08-07T18:08:36.5954644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0146s] [ 59%] 2024-08-07T18:08:36.5955881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0146s] [ 59%] 2024-08-07T18:08:36.5957090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0086s] [ 59%] 2024-08-07T18:08:36.5958314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 59%] 2024-08-07T18:08:36.5959560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0114s] [ 59%] 2024-08-07T18:08:36.5960782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 59%] 2024-08-07T18:08:36.5962020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 59%] 2024-08-07T18:08:36.5963459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 59%] 2024-08-07T18:08:36.5964763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0150s] [ 
59%] 2024-08-07T18:08:36.5965988Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0150s] [ 59%] 2024-08-07T18:08:36.5967222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 59%] 2024-08-07T18:08:36.5968445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 59%] 2024-08-07T18:08:36.5969721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0119s] [ 59%] 2024-08-07T18:08:36.5970959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0120s] [ 59%] 2024-08-07T18:08:36.5972208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 59%] 2024-08-07T18:08:36.5973491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 59%] 2024-08-07T18:08:36.5974723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0152s] [ 59%] 2024-08-07T18:08:36.5975959Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0152s] [ 59%] 2024-08-07T18:08:36.5977168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 59%] 2024-08-07T18:08:36.5978406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 59%] 2024-08-07T18:08:36.5979618Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0117s] [ 59%] 2024-08-07T18:08:36.5980830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0129s] [ 59%] 2024-08-07T18:08:36.5982102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 
59%] 2024-08-07T18:08:36.5983374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 59%] 2024-08-07T18:08:36.5984610Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0162s] [ 59%] 2024-08-07T18:08:36.5985834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0162s] [ 59%] 2024-08-07T18:08:36.5987073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0083s] [ 59%] 2024-08-07T18:08:36.5988302Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 59%] 2024-08-07T18:08:36.5989522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0122s] [ 59%] 2024-08-07T18:08:36.5990781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0131s] [ 59%] 2024-08-07T18:08:36.5992042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0084s] [ 59%] 2024-08-07T18:08:36.5993264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 59%] 2024-08-07T18:08:36.5994471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0174s] [ 59%] 2024-08-07T18:08:36.5996075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0173s] [ 59%] 2024-08-07T18:08:36.5997328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 59%] 2024-08-07T18:08:36.5998565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 59%] 2024-08-07T18:08:36.5999784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0105s] [ 
59%] 2024-08-07T18:08:36.6001092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0095s] [ 59%] 2024-08-07T18:08:36.6003079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 59%] 2024-08-07T18:08:36.6004360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 59%] 2024-08-07T18:08:36.6005597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0143s] [ 59%] 2024-08-07T18:08:36.6006835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0144s] [ 59%] 2024-08-07T18:08:36.6008078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0085s] [ 59%] 2024-08-07T18:08:36.6009298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 59%] 2024-08-07T18:08:36.6010593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0108s] [ 59%] 2024-08-07T18:08:36.6011873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 59%] 2024-08-07T18:08:36.6013096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 59%] 2024-08-07T18:08:36.6014303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 59%] 2024-08-07T18:08:36.6015515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0145s] [ 59%] 2024-08-07T18:08:36.6016771Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0143s] [ 59%] 2024-08-07T18:08:36.6017979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0085s] [ 59%] 
2024-08-07T18:08:36.6019208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 59%] 2024-08-07T18:08:36.6020476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 59%] 2024-08-07T18:08:36.6021808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 59%] 2024-08-07T18:08:36.6023032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 59%] 2024-08-07T18:08:36.6024276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 59%] 2024-08-07T18:08:36.6025503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 59%] 2024-08-07T18:08:36.6026753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 59%] 2024-08-07T18:08:36.6029149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 59%] 2024-08-07T18:08:36.6031517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 59%] 2024-08-07T18:08:36.6033965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 59%] 2024-08-07T18:08:36.6036353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 59%] 2024-08-07T18:08:36.6038676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 59%] 2024-08-07T18:08:36.6040976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 59%] 2024-08-07T18:08:36.6043296Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED 
[0.0077s] [ 59%] 2024-08-07T18:08:36.6045620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 59%] 2024-08-07T18:08:36.6047930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 59%] 2024-08-07T18:08:36.6051881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 59%] 2024-08-07T18:08:36.6054252Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 59%] 2024-08-07T18:08:36.6056569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 60%] 2024-08-07T18:08:36.6058879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 60%] 2024-08-07T18:08:36.6061194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 60%] 2024-08-07T18:08:36.6063546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 60%] 2024-08-07T18:08:36.6065867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 60%] 2024-08-07T18:08:36.6068237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 60%] 2024-08-07T18:08:36.6071801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 60%] 2024-08-07T18:08:36.6074132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 60%] 2024-08-07T18:08:36.6076430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 60%] 2024-08-07T18:08:36.6078729Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 60%] 2024-08-07T18:08:36.6081028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 60%] 2024-08-07T18:08:36.6083335Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 60%] 2024-08-07T18:08:36.6085654Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 60%] 2024-08-07T18:08:36.6088313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 60%] 2024-08-07T18:08:36.6091971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 60%] 2024-08-07T18:08:36.6094275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 60%] 2024-08-07T18:08:36.6096867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 60%] 2024-08-07T18:08:36.6099213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 60%] 2024-08-07T18:08:36.6101543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 60%] 2024-08-07T18:08:36.6103916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 60%] 2024-08-07T18:08:36.6106355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 60%] 2024-08-07T18:08:36.6110276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 60%] 2024-08-07T18:08:36.6112645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 60%] 
2024-08-07T18:08:36.6114954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 60%] 2024-08-07T18:08:36.6117257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 60%] 2024-08-07T18:08:36.6119554Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 60%] 2024-08-07T18:08:36.6121900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 60%] 2024-08-07T18:08:36.6124276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 60%] 2024-08-07T18:08:36.6126693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 60%] 2024-08-07T18:08:36.6129078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 60%] 2024-08-07T18:08:36.6131374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 60%] 2024-08-07T18:08:36.6133690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 60%] 2024-08-07T18:08:36.6135999Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 60%] 2024-08-07T18:08:36.6138323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 60%] 2024-08-07T18:08:36.6140624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 60%] 2024-08-07T18:08:36.6142986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 60%] 2024-08-07T18:08:36.6145362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] 
[ 60%] 2024-08-07T18:08:36.6147683Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 60%] 2024-08-07T18:08:36.6149993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 60%] 2024-08-07T18:08:36.6152299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 60%] 2024-08-07T18:08:36.6154608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 60%] 2024-08-07T18:08:36.6156875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 60%] 2024-08-07T18:08:36.6159185Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 60%] 2024-08-07T18:08:36.6161540Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 60%] 2024-08-07T18:08:36.6163917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 60%] 2024-08-07T18:08:36.6166234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 60%] 2024-08-07T18:08:36.6168510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 60%] 2024-08-07T18:08:36.6170822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0088s] [ 60%] 2024-08-07T18:08:36.6173145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 60%] 2024-08-07T18:08:36.6175518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 60%] 2024-08-07T18:08:36.6177832Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED 
[0.0059s] [ 60%] 2024-08-07T18:08:36.6180225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0110s] [ 60%] 2024-08-07T18:08:36.6182595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 60%] 2024-08-07T18:08:36.6184914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 60%] 2024-08-07T18:08:36.6187261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 60%] 2024-08-07T18:08:36.6189583Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 60%] 2024-08-07T18:08:36.6191904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 60%] 2024-08-07T18:08:36.6194187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 60%] 2024-08-07T18:08:36.6196875Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 60%] 2024-08-07T18:08:36.6199290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 60%] 2024-08-07T18:08:36.6201622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 60%] 2024-08-07T18:08:36.6203929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 60%] 2024-08-07T18:08:36.6206217Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 60%] 2024-08-07T18:08:36.6208532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0089s] [ 60%] 2024-08-07T18:08:36.6210845Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0098s] [ 60%] 2024-08-07T18:08:36.6213216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 60%] 2024-08-07T18:08:36.6215759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 60%] 2024-08-07T18:08:36.6218191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0111s] [ 60%] 2024-08-07T18:08:36.6220537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 60%] 2024-08-07T18:08:36.6222893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 60%] 2024-08-07T18:08:36.6225306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 60%] 2024-08-07T18:08:36.6227616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 60%] 2024-08-07T18:08:36.6229918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 60%] 2024-08-07T18:08:36.6232249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 60%] 2024-08-07T18:08:36.6234596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 60%] 2024-08-07T18:08:36.6236893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 60%] 2024-08-07T18:08:36.6239288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 60%] 2024-08-07T18:08:36.6241609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 60%] 
2024-08-07T18:08:36.6243905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 61%] 2024-08-07T18:08:36.6246212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0096s] [ 61%] 2024-08-07T18:08:36.6248521Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 61%] 2024-08-07T18:08:36.6250892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 61%] 2024-08-07T18:08:36.6253313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 61%] 2024-08-07T18:08:36.6255606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0118s] [ 61%] 2024-08-07T18:08:36.6257972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0120s] [ 61%] 2024-08-07T18:08:36.6260299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 61%] 2024-08-07T18:08:36.6262615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 61%] 2024-08-07T18:08:36.6264915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 61%] 2024-08-07T18:08:36.6267238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 61%] 2024-08-07T18:08:36.6269609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 61%] 2024-08-07T18:08:36.6271908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 61%] 2024-08-07T18:08:36.6274209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED 
[0.0081s] [ 61%] 2024-08-07T18:08:36.6276552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 61%] 2024-08-07T18:08:36.6278867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 61%] 2024-08-07T18:08:36.6281160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 61%] 2024-08-07T18:08:36.6283546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0085s] [ 61%] 2024-08-07T18:08:36.6285901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 61%] 2024-08-07T18:08:36.6288267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 61%] 2024-08-07T18:08:36.6290581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 61%] 2024-08-07T18:08:36.6292885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0107s] [ 61%] 2024-08-07T18:08:36.6295550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0109s] [ 61%] 2024-08-07T18:08:36.6297885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 61%] 2024-08-07T18:08:36.6300193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 61%] 2024-08-07T18:08:36.6302574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 61%] 2024-08-07T18:08:36.6304942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 61%] 2024-08-07T18:08:36.6307300Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 
PASSED [0.0061s] [ 61%] 2024-08-07T18:08:36.6309594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 61%] 2024-08-07T18:08:36.6311893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 61%] 2024-08-07T18:08:36.6314222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 61%] 2024-08-07T18:08:36.6316531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 61%] 2024-08-07T18:08:36.6318827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 61%] 2024-08-07T18:08:36.6321203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 61%] 2024-08-07T18:08:36.6323604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 61%] 2024-08-07T18:08:36.6325897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 61%] 2024-08-07T18:08:36.6328189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 61%] 2024-08-07T18:08:36.6330477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 61%] 2024-08-07T18:08:36.6332796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 61%] 2024-08-07T18:08:36.6335097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 61%] 2024-08-07T18:08:36.6337452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 61%] 2024-08-07T18:08:36.6339790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 
PASSED [0.0060s] [ 61%] 2024-08-07T18:08:36.6342068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 61%] 2024-08-07T18:08:36.6344346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 61%] 2024-08-07T18:08:36.6346629Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 61%] 2024-08-07T18:08:36.6348938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 61%] 2024-08-07T18:08:36.6351315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 61%] 2024-08-07T18:08:36.6353593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 61%] 2024-08-07T18:08:36.6355932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 61%] 2024-08-07T18:08:36.6358281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 61%] 2024-08-07T18:08:36.6360573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 61%] 2024-08-07T18:08:36.6362865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 61%] 2024-08-07T18:08:36.6365141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 61%] 2024-08-07T18:08:36.6367466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0064s] [ 61%] 2024-08-07T18:08:36.6369777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 61%] 2024-08-07T18:08:36.6372127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED 
[0.0064s] [ 61%] 2024-08-07T18:08:36.6374508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 61%] 2024-08-07T18:08:36.6376812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 61%] 2024-08-07T18:08:36.6379093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 61%] 2024-08-07T18:08:36.6381384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 61%] 2024-08-07T18:08:36.6383657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 61%] 2024-08-07T18:08:36.6385945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 61%] 2024-08-07T18:08:36.6388241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 61%] 2024-08-07T18:08:36.6390576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 61%] 2024-08-07T18:08:36.6392924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 61%] 2024-08-07T18:08:36.6395447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 61%] 2024-08-07T18:08:36.6397763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 61%] 2024-08-07T18:08:36.6400091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 61%] 2024-08-07T18:08:36.6402394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 61%] 2024-08-07T18:08:36.6404689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 
61%] 2024-08-07T18:08:36.6407097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 61%] 2024-08-07T18:08:36.6409463Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0061s] [ 61%] 2024-08-07T18:08:36.6411762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 61%] 2024-08-07T18:08:36.6414049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 61%] 2024-08-07T18:08:36.6416336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 61%] 2024-08-07T18:08:36.6418624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 61%] 2024-08-07T18:08:36.6420884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 61%] 2024-08-07T18:08:36.6423203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 62%] 2024-08-07T18:08:36.6425569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 62%] 2024-08-07T18:08:36.6427936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 62%] 2024-08-07T18:08:36.6430218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 62%] 2024-08-07T18:08:36.6432503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 62%] 2024-08-07T18:08:36.6434793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 62%] 2024-08-07T18:08:36.6437083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 62%] 
2024-08-07T18:08:36.6439365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 62%] 2024-08-07T18:08:36.6441648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 62%] 2024-08-07T18:08:36.6443990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 62%] 2024-08-07T18:08:36.6446344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0061s] [ 62%] 2024-08-07T18:08:36.6448630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 62%] 2024-08-07T18:08:36.6450920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 62%] 2024-08-07T18:08:36.6453207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 62%] 2024-08-07T18:08:36.6455460Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 62%] 2024-08-07T18:08:36.6457753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 62%] 2024-08-07T18:08:36.6460075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 62%] 2024-08-07T18:08:36.6462413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 62%] 2024-08-07T18:08:36.6464707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 62%] 2024-08-07T18:08:36.6466994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 62%] 2024-08-07T18:08:36.6469272Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0132s] [ 62%] 
2024-08-07T18:08:36.6471601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0140s] [ 62%] 2024-08-07T18:08:36.6473909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 62%] 2024-08-07T18:08:36.6476215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 62%] 2024-08-07T18:08:36.6478574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0170s] [ 62%] 2024-08-07T18:08:36.6480940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0176s] [ 62%] 2024-08-07T18:08:36.6483293Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 62%] 2024-08-07T18:08:36.6485612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 62%] 2024-08-07T18:08:36.6487930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0084s] [ 62%] 2024-08-07T18:08:36.6490236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 62%] 2024-08-07T18:08:36.6492522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 62%] 2024-08-07T18:08:36.6494870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 62%] 2024-08-07T18:08:36.6497547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0089s] [ 62%] 2024-08-07T18:08:36.6499856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 62%] 2024-08-07T18:08:36.6502178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED 
[0.0078s] [ 62%] 2024-08-07T18:08:36.6504466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 62%] 2024-08-07T18:08:36.6506783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0139s] [ 62%] 2024-08-07T18:08:36.6509103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0145s] [ 62%] 2024-08-07T18:08:36.6511417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 62%] 2024-08-07T18:08:36.6513842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 62%] 2024-08-07T18:08:36.6516267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0178s] [ 62%] 2024-08-07T18:08:36.6518567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0181s] [ 62%] 2024-08-07T18:08:36.6520906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 62%] 2024-08-07T18:08:36.6523297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 62%] 2024-08-07T18:08:36.6525598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 62%] 2024-08-07T18:08:36.6527899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 62%] 2024-08-07T18:08:36.6530243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 62%] 2024-08-07T18:08:36.6532606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 62%] 2024-08-07T18:08:36.6534906Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0094s] [ 62%] 2024-08-07T18:08:36.6537210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0095s] [ 62%] 2024-08-07T18:08:36.6539531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 62%] 2024-08-07T18:08:36.6541837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 62%] 2024-08-07T18:08:36.6544134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0146s] [ 62%] 2024-08-07T18:08:36.6546442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0161s] [ 62%] 2024-08-07T18:08:36.6548823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0084s] [ 62%] 2024-08-07T18:08:36.6551192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 62%] 2024-08-07T18:08:36.6553526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0190s] [ 62%] 2024-08-07T18:08:36.6555854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0193s] [ 62%] 2024-08-07T18:08:36.6558204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0088s] [ 62%] 2024-08-07T18:08:36.6560524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 62%] 2024-08-07T18:08:36.6562833Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0092s] [ 62%] 2024-08-07T18:08:36.6565177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 62%] 
2024-08-07T18:08:36.6567534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 62%] 2024-08-07T18:08:36.6569835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 62%] 2024-08-07T18:08:36.6572140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 62%] 2024-08-07T18:08:36.6574470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 62%] 2024-08-07T18:08:36.6576804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 62%] 2024-08-07T18:08:36.6579093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 62%] 2024-08-07T18:08:36.6581418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0130s] [ 62%] 2024-08-07T18:08:36.6583776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0136s] [ 62%] 2024-08-07T18:08:36.6586142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 62%] 2024-08-07T18:08:36.6588444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 62%] 2024-08-07T18:08:36.6590739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0171s] [ 62%] 2024-08-07T18:08:36.6593061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0175s] [ 62%] 2024-08-07T18:08:36.6595611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0083s] [ 62%] 2024-08-07T18:08:36.6597950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED 
[0.0081s] [ 62%] 2024-08-07T18:08:36.6600378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 62%] 2024-08-07T18:08:36.6602738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 62%] 2024-08-07T18:08:36.6605025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 63%] 2024-08-07T18:08:36.6607336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 63%] 2024-08-07T18:08:36.6609636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 63%] 2024-08-07T18:08:36.6611946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 63%] 2024-08-07T18:08:36.6614269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0078s] [ 63%] 2024-08-07T18:08:36.6616552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 63%] 2024-08-07T18:08:36.6618917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 63%] 2024-08-07T18:08:36.6621301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 63%] 2024-08-07T18:08:36.6623647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 63%] 2024-08-07T18:08:36.6625962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0053s] [ 63%] 2024-08-07T18:08:36.6628255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 63%] 2024-08-07T18:08:36.6630570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED 
[0.0071s] [ 63%] 2024-08-07T18:08:36.6632915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 63%] 2024-08-07T18:08:36.6635275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 63%] 2024-08-07T18:08:36.6637621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 63%] 2024-08-07T18:08:36.6639916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 63%] 2024-08-07T18:08:36.6642200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 63%] 2024-08-07T18:08:36.6644490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 63%] 2024-08-07T18:08:36.6646789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 63%] 2024-08-07T18:08:36.6649094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 63%] 2024-08-07T18:08:36.6651382Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 63%] 2024-08-07T18:08:36.6653739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 63%] 2024-08-07T18:08:36.6656160Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 63%] 2024-08-07T18:08:36.6658458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 63%] 2024-08-07T18:08:36.6660758Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 63%] 2024-08-07T18:08:36.6663058Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED 
[0.0058s] [ 63%] 2024-08-07T18:08:36.6665376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 63%] 2024-08-07T18:08:36.6667679Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 63%] 2024-08-07T18:08:36.6670052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 63%] 2024-08-07T18:08:36.6672431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 63%] 2024-08-07T18:08:36.6674712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 63%] 2024-08-07T18:08:36.6676985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 63%] 2024-08-07T18:08:36.6679280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 63%] 2024-08-07T18:08:36.6681578Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 63%] 2024-08-07T18:08:36.6683867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 63%] 2024-08-07T18:08:36.6686157Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 63%] 2024-08-07T18:08:36.6688516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 63%] 2024-08-07T18:08:36.6690867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 63%] 2024-08-07T18:08:36.6693179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 63%] 2024-08-07T18:08:36.6695738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 
PASSED [0.0062s] [ 63%] 2024-08-07T18:08:36.6698092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 63%] 2024-08-07T18:08:36.6700387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 63%] 2024-08-07T18:08:36.6702699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 63%] 2024-08-07T18:08:36.6705105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 63%] 2024-08-07T18:08:36.6707515Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 63%] 2024-08-07T18:08:36.6709834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 63%] 2024-08-07T18:08:36.6712127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 63%] 2024-08-07T18:08:36.6714411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 63%] 2024-08-07T18:08:36.6716708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 63%] 2024-08-07T18:08:36.6718994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 63%] 2024-08-07T18:08:36.6721282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 63%] 2024-08-07T18:08:36.6723707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 63%] 2024-08-07T18:08:36.6726078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 63%] 2024-08-07T18:08:36.6728375Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 63%] 2024-08-07T18:08:36.6730667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 63%] 2024-08-07T18:08:36.6732960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 63%] 2024-08-07T18:08:36.6735260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 63%] 2024-08-07T18:08:36.6737528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 63%] 2024-08-07T18:08:36.6739868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 63%] 2024-08-07T18:08:36.6742229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 63%] 2024-08-07T18:08:36.6744547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0061s] [ 63%] 2024-08-07T18:08:36.6746840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 63%] 2024-08-07T18:08:36.6749140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 63%] 2024-08-07T18:08:36.6751422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 63%] 2024-08-07T18:08:36.6753697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 63%] 2024-08-07T18:08:36.6755993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 63%] 2024-08-07T18:08:36.6758326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 63%] 2024-08-07T18:08:36.6760668Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 63%] 2024-08-07T18:08:36.6762936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 63%] 2024-08-07T18:08:36.6765219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 63%] 2024-08-07T18:08:36.6767511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0055s] [ 63%] 2024-08-07T18:08:36.6769814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 63%] 2024-08-07T18:08:36.6772105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 63%] 2024-08-07T18:08:36.6774379Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 63%] 2024-08-07T18:08:36.6776762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 63%] 2024-08-07T18:08:36.6779132Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 63%] 2024-08-07T18:08:36.6781474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 63%] 2024-08-07T18:08:36.6783777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 64%] 2024-08-07T18:08:36.6786055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 64%] 2024-08-07T18:08:36.6788338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 64%] 2024-08-07T18:08:36.6790625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 64%] 2024-08-07T18:08:36.6792954Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 64%] 2024-08-07T18:08:36.6795563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 64%] 2024-08-07T18:08:36.6797880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 64%] 2024-08-07T18:08:36.6800174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 64%] 2024-08-07T18:08:36.6802480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 64%] 2024-08-07T18:08:36.6804791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 64%] 2024-08-07T18:08:36.6807100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 64%] 2024-08-07T18:08:36.6809371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 64%] 2024-08-07T18:08:36.6811757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 64%] 2024-08-07T18:08:36.6814136Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 64%] 2024-08-07T18:08:36.6816446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 64%] 2024-08-07T18:08:36.6818759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 64%] 2024-08-07T18:08:36.6821078Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 64%] 2024-08-07T18:08:36.6823403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 64%] 2024-08-07T18:08:36.6825695Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 64%] 2024-08-07T18:08:36.6828040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 64%] 2024-08-07T18:08:36.6830414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 64%] 2024-08-07T18:08:36.6832699Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 64%] 2024-08-07T18:08:36.6834964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 64%] 2024-08-07T18:08:36.6837258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 64%] 2024-08-07T18:08:36.6839591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 64%] 2024-08-07T18:08:36.6841878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 64%] 2024-08-07T18:08:36.6844173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 64%] 2024-08-07T18:08:36.6846493Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0053s] [ 64%] 2024-08-07T18:08:36.6848839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 64%] 2024-08-07T18:08:36.6851138Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 64%] 2024-08-07T18:08:36.6853482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 64%] 2024-08-07T18:08:36.6855791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 64%] 2024-08-07T18:08:36.6858094Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 64%] 2024-08-07T18:08:36.6860430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 64%] 2024-08-07T18:08:36.6862757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 64%] 2024-08-07T18:08:36.6865118Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 64%] 2024-08-07T18:08:36.6867398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 64%] 2024-08-07T18:08:36.6869660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 64%] 2024-08-07T18:08:36.6871947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 64%] 2024-08-07T18:08:36.6874242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 64%] 2024-08-07T18:08:36.6876548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 64%] 2024-08-07T18:08:36.6878835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 64%] 2024-08-07T18:08:36.6881168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 64%] 2024-08-07T18:08:36.6883489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0052s] [ 64%] 2024-08-07T18:08:36.6885777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 64%] 2024-08-07T18:08:36.6888060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 64%] 2024-08-07T18:08:36.6890390Z 
2024-08-07T18:08:36.6890390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 64%]
2024-08-07T18:08:36.6892688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 64%]
2024-08-07T18:08:36.6894983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 64%]
2024-08-07T18:08:36.6897537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 64%]
2024-08-07T18:08:36.6899903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 64%]
2024-08-07T18:08:36.6902257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 64%]
2024-08-07T18:08:36.6904536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 64%]
2024-08-07T18:08:36.6906801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 64%]
2024-08-07T18:08:36.6909103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 64%]
2024-08-07T18:08:36.6911386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 64%]
2024-08-07T18:08:36.6913688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_64_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 64%]
2024-08-07T18:08:36.6916085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 64%]
2024-08-07T18:08:36.6918442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 64%]
2024-08-07T18:08:36.6920747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 64%]
2024-08-07T18:08:36.6923108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 64%]
2024-08-07T18:08:36.6925441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 64%]
2024-08-07T18:08:36.6927744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 64%]
2024-08-07T18:08:36.6930047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 64%]
2024-08-07T18:08:36.6932360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 64%]
2024-08-07T18:08:36.6934712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 64%]
2024-08-07T18:08:36.6937086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 64%]
2024-08-07T18:08:36.6939378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 64%]
2024-08-07T18:08:36.6941677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 64%]
2024-08-07T18:08:36.6943964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 64%]
2024-08-07T18:08:36.6946263Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 64%]
2024-08-07T18:08:36.6948587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 64%]
2024-08-07T18:08:36.6950964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 64%]
2024-08-07T18:08:36.6953326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 64%]
2024-08-07T18:08:36.6955609Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 64%]
2024-08-07T18:08:36.6957947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 64%]
2024-08-07T18:08:36.6960258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 64%]
2024-08-07T18:08:36.6962584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0064s] [ 65%]
2024-08-07T18:08:36.6964891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 65%]
2024-08-07T18:08:36.6967179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 65%]
2024-08-07T18:08:36.6969580Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 65%]
2024-08-07T18:08:36.6971952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 65%]
2024-08-07T18:08:36.6974253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 65%]
2024-08-07T18:08:36.6976589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 65%]
2024-08-07T18:08:36.6978893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 65%]
2024-08-07T18:08:36.6981200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 65%]
2024-08-07T18:08:36.6983495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 65%]
2024-08-07T18:08:36.6985847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 65%]
2024-08-07T18:08:36.6988271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 65%]
2024-08-07T18:08:36.6990576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 65%]
2024-08-07T18:08:36.6992857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 65%]
2024-08-07T18:08:36.6995444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 65%]
2024-08-07T18:08:36.6997782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 65%]
2024-08-07T18:08:36.7000101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 65%]
2024-08-07T18:08:36.7002411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 65%]
2024-08-07T18:08:36.7004821Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 65%]
2024-08-07T18:08:36.7007278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 65%]
2024-08-07T18:08:36.7009582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 65%]
2024-08-07T18:08:36.7011877Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 65%]
2024-08-07T18:08:36.7014174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 65%]
2024-08-07T18:08:36.7016452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 65%]
2024-08-07T18:08:36.7018748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 65%]
2024-08-07T18:08:36.7021126Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 65%]
2024-08-07T18:08:36.7023556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 65%]
2024-08-07T18:08:36.7025866Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 65%]
2024-08-07T18:08:36.7028142Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 65%]
2024-08-07T18:08:36.7030443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 65%]
2024-08-07T18:08:36.7032761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0053s] [ 65%]
2024-08-07T18:08:36.7035057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 65%]
2024-08-07T18:08:36.7037356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 65%]
2024-08-07T18:08:36.7039740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 65%]
2024-08-07T18:08:36.7042099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0061s] [ 65%]
2024-08-07T18:08:36.7044407Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 65%]
2024-08-07T18:08:36.7046702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 65%]
2024-08-07T18:08:36.7049006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 65%]
2024-08-07T18:08:36.7051306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 65%]
2024-08-07T18:08:36.7053576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 65%]
2024-08-07T18:08:36.7055905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 65%]
2024-08-07T18:08:36.7058251Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 65%]
2024-08-07T18:08:36.7060535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 65%]
2024-08-07T18:08:36.7071659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 65%]
2024-08-07T18:08:36.7074022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 65%]
2024-08-07T18:08:36.7076339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 65%]
2024-08-07T18:08:36.7078663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 65%]
2024-08-07T18:08:36.7080978Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 65%]
2024-08-07T18:08:36.7083420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 65%]
2024-08-07T18:08:36.7085841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 65%]
2024-08-07T18:08:36.7088146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 65%]
2024-08-07T18:08:36.7090444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 65%]
2024-08-07T18:08:36.7092776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 65%]
2024-08-07T18:08:36.7095353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 65%]
2024-08-07T18:08:36.7097680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 65%]
2024-08-07T18:08:36.7100061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 65%]
2024-08-07T18:08:36.7102457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 65%]
2024-08-07T18:08:36.7104760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 65%]
2024-08-07T18:08:36.7107059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 65%]
2024-08-07T18:08:36.7109368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 65%]
2024-08-07T18:08:36.7111705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 65%]
2024-08-07T18:08:36.7113991Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 65%]
2024-08-07T18:08:36.7116299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 65%]
2024-08-07T18:08:36.7118697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 65%]
2024-08-07T18:08:36.7121093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 65%]
2024-08-07T18:08:36.7123454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 65%]
2024-08-07T18:08:36.7125742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 65%]
2024-08-07T18:08:36.7128064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 65%]
2024-08-07T18:08:36.7130380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 65%]
2024-08-07T18:08:36.7132674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 65%]
2024-08-07T18:08:36.7135008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 65%]
2024-08-07T18:08:36.7137329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 65%]
2024-08-07T18:08:36.7139625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 65%]
2024-08-07T18:08:36.7141919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 65%]
2024-08-07T18:08:36.7144240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 65%]
2024-08-07T18:08:36.7146551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 65%]
2024-08-07T18:08:36.7148845Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 65%]
2024-08-07T18:08:36.7151159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 65%]
2024-08-07T18:08:36.7153564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 66%]
2024-08-07T18:08:36.7155987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 66%]
2024-08-07T18:08:36.7158298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0085s] [ 66%]
2024-08-07T18:08:36.7160602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 66%]
2024-08-07T18:08:36.7162898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 66%]
2024-08-07T18:08:36.7165218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 66%]
2024-08-07T18:08:36.7167529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 66%]
2024-08-07T18:08:36.7169825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 66%]
2024-08-07T18:08:36.7172155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 66%]
2024-08-07T18:08:36.7174485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 66%]
2024-08-07T18:08:36.7176806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 66%]
2024-08-07T18:08:36.7179115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 66%]
2024-08-07T18:08:36.7181432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 66%]
2024-08-07T18:08:36.7183734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 66%]
2024-08-07T18:08:36.7186028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 66%]
2024-08-07T18:08:36.7188378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 66%]
2024-08-07T18:08:36.7190731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 66%]
2024-08-07T18:08:36.7193030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 66%]
2024-08-07T18:08:36.7195598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 66%]
2024-08-07T18:08:36.7197911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 66%]
2024-08-07T18:08:36.7200231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 66%]
2024-08-07T18:08:36.7202564Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 66%]
2024-08-07T18:08:36.7204873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 66%]
2024-08-07T18:08:36.7207247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 66%]
2024-08-07T18:08:36.7209612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 66%]
2024-08-07T18:08:36.7210825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 66%]
2024-08-07T18:08:36.7212038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 66%]
2024-08-07T18:08:36.7213278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 66%]
2024-08-07T18:08:36.7214484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 66%]
2024-08-07T18:08:36.7215708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 66%]
2024-08-07T18:08:36.7216995Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 66%]
2024-08-07T18:08:36.7218312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 66%]
2024-08-07T18:08:36.7219499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 66%]
2024-08-07T18:08:36.7220722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 66%]
2024-08-07T18:08:36.7221933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 66%]
2024-08-07T18:08:36.7223213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 66%]
2024-08-07T18:08:36.7224432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 66%]
2024-08-07T18:08:36.7225643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 66%]
2024-08-07T18:08:36.7226950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 66%]
2024-08-07T18:08:36.7228215Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 66%]
2024-08-07T18:08:36.7229427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 66%]
2024-08-07T18:08:36.7230623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 66%]
2024-08-07T18:08:36.7231826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 66%]
2024-08-07T18:08:36.7233061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 66%]
2024-08-07T18:08:36.7234255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 66%]
2024-08-07T18:08:36.7235530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 66%]
2024-08-07T18:08:36.7236808Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 66%]
2024-08-07T18:08:36.7238036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 66%]
2024-08-07T18:08:36.7239234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 66%]
2024-08-07T18:08:36.7240464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 66%]
2024-08-07T18:08:36.7241675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 66%]
2024-08-07T18:08:36.7242884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 66%]
2024-08-07T18:08:36.7244105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 66%]
2024-08-07T18:08:36.7245365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 66%]
2024-08-07T18:08:36.7246646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 66%]
2024-08-07T18:08:36.7247850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 66%]
2024-08-07T18:08:36.7249060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 66%]
2024-08-07T18:08:36.7250264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 66%]
2024-08-07T18:08:36.7251496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 66%]
2024-08-07T18:08:36.7252713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 66%]
2024-08-07T18:08:36.7253899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 66%]
2024-08-07T18:08:36.7255167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 66%]
2024-08-07T18:08:36.7256436Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 66%]
2024-08-07T18:08:36.7257663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 66%]
2024-08-07T18:08:36.7258870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 66%]
2024-08-07T18:08:36.7260107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 66%]
2024-08-07T18:08:36.7261313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 66%]
2024-08-07T18:08:36.7262547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 66%]
2024-08-07T18:08:36.7263799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 66%]
2024-08-07T18:08:36.7265064Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 66%]
2024-08-07T18:08:36.7266277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 66%]
2024-08-07T18:08:36.7267477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 66%]
2024-08-07T18:08:36.7268696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 66%]
2024-08-07T18:08:36.7269899Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 66%]
2024-08-07T18:08:36.7271116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 66%]
2024-08-07T18:08:36.7272315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 66%]
2024-08-07T18:08:36.7273593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 66%]
2024-08-07T18:08:36.7274862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 67%]
2024-08-07T18:08:36.7276060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 67%]
2024-08-07T18:08:36.7277291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0055s] [ 67%]
2024-08-07T18:08:36.7278489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 67%]
2024-08-07T18:08:36.7279720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 67%]
2024-08-07T18:08:36.7280925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 67%]
2024-08-07T18:08:36.7282196Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 67%]
2024-08-07T18:08:36.7283467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 67%]
2024-08-07T18:08:36.7284682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 67%]
2024-08-07T18:08:36.7285874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 67%]
2024-08-07T18:08:36.7287102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 67%]
2024-08-07T18:08:36.7288324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 67%]
2024-08-07T18:08:36.7289526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 67%]
2024-08-07T18:08:36.7290738Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 67%]
2024-08-07T18:08:36.7291985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 67%]
2024-08-07T18:08:36.7293277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 67%]
2024-08-07T18:08:36.7294475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 67%]
2024-08-07T18:08:36.7295976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0092s] [ 67%]
2024-08-07T18:08:36.7297268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0095s] [ 67%]
2024-08-07T18:08:36.7298517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 67%]
2024-08-07T18:08:36.7299734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 67%]
2024-08-07T18:08:36.7301034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0101s] [ 67%]
2024-08-07T18:08:36.7302350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 67%]
2024-08-07T18:08:36.7303568Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 67%]
2024-08-07T18:08:36.7304807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 67%]
2024-08-07T18:08:36.7306017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 67%]
2024-08-07T18:08:36.7307279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 67%]
2024-08-07T18:08:36.7308482Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 67%]
2024-08-07T18:08:36.7309692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 67%]
2024-08-07T18:08:36.7310981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 67%]
2024-08-07T18:08:36.7312278Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 67%]
2024-08-07T18:08:36.7313509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 67%]
2024-08-07T18:08:36.7314725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 67%]
2024-08-07T18:08:36.7315962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0099s] [ 67%]
2024-08-07T18:08:36.7317211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 67%]
2024-08-07T18:08:36.7318442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 67%]
2024-08-07T18:08:36.7319657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 67%]
2024-08-07T18:08:36.7320924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0109s] [ 67%]
2024-08-07T18:08:36.7322275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 67%]
2024-08-07T18:08:36.7323475Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 67%]
2024-08-07T18:08:36.7324731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 67%]
2024-08-07T18:08:36.7325945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 67%]
2024-08-07T18:08:36.7327202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 67%]
2024-08-07T18:08:36.7328399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 67%]
2024-08-07T18:08:36.7329676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 67%]
2024-08-07T18:08:36.7330939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 67%]
2024-08-07T18:08:36.7332152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 67%]
2024-08-07T18:08:36.7333375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 67%]
2024-08-07T18:08:36.7334598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 67%]
2024-08-07T18:08:36.7335838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0116s] [ 67%]
2024-08-07T18:08:36.7337077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0121s] [ 67%]
2024-08-07T18:08:36.7338313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 67%]
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 67%] 2024-08-07T18:08:36.7340867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0127s] [ 67%] 2024-08-07T18:08:36.7342087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0129s] [ 67%] 2024-08-07T18:08:36.7343311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 67%] 2024-08-07T18:08:36.7344559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 67%] 2024-08-07T18:08:36.7345759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 67%] 2024-08-07T18:08:36.7347011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 67%] 2024-08-07T18:08:36.7348264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 67%] 2024-08-07T18:08:36.7349545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 67%] 2024-08-07T18:08:36.7350750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 67%] 2024-08-07T18:08:36.7351983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 67%] 2024-08-07T18:08:36.7353197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 67%] 2024-08-07T18:08:36.7354423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 67%] 2024-08-07T18:08:36.7355650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0089s] [ 67%] 2024-08-07T18:08:36.7356880Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 67%] 2024-08-07T18:08:36.7358154Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 67%] 2024-08-07T18:08:36.7359421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 67%] 2024-08-07T18:08:36.7360651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0096s] [ 67%] 2024-08-07T18:08:36.7361870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 67%] 2024-08-07T18:08:36.7363110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 67%] 2024-08-07T18:08:36.7364328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 67%] 2024-08-07T18:08:36.7365530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 67%] 2024-08-07T18:08:36.7366828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 67%] 2024-08-07T18:08:36.7368089Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 67%] 2024-08-07T18:08:36.7369312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 67%] 2024-08-07T18:08:36.7370514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 67%] 2024-08-07T18:08:36.7371748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 68%] 2024-08-07T18:08:36.7372963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7374187Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7375392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 68%] 2024-08-07T18:08:36.7376659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 68%] 2024-08-07T18:08:36.7377936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0053s] [ 68%] 2024-08-07T18:08:36.7379151Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 68%] 2024-08-07T18:08:36.7380376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7381596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 68%] 2024-08-07T18:08:36.7382834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 68%] 2024-08-07T18:08:36.7384050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 68%] 2024-08-07T18:08:36.7385314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 68%] 2024-08-07T18:08:36.7386579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 68%] 2024-08-07T18:08:36.7387789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 68%] 2024-08-07T18:08:36.7389029Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 68%] 2024-08-07T18:08:36.7390222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 68%] 2024-08-07T18:08:36.7391459Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 68%] 2024-08-07T18:08:36.7392698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7393924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 68%] 2024-08-07T18:08:36.7395473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 68%] 2024-08-07T18:08:36.7396809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 68%] 2024-08-07T18:08:36.7398036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 68%] 2024-08-07T18:08:36.7399240Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 68%] 2024-08-07T18:08:36.7400477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7401707Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 68%] 2024-08-07T18:08:36.7402938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0061s] [ 68%] 2024-08-07T18:08:36.7404218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 68%] 2024-08-07T18:08:36.7405514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 68%] 2024-08-07T18:08:36.7406718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 68%] 2024-08-07T18:08:36.7407947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 68%] 2024-08-07T18:08:36.7409156Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 68%] 2024-08-07T18:08:36.7410373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7411598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 68%] 2024-08-07T18:08:36.7412801Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 68%] 2024-08-07T18:08:36.7414075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 68%] 2024-08-07T18:08:36.7415332Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 68%] 2024-08-07T18:08:36.7416565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 68%] 2024-08-07T18:08:36.7417786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 68%] 2024-08-07T18:08:36.7419018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 68%] 2024-08-07T18:08:36.7420239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 68%] 2024-08-07T18:08:36.7421457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 68%] 2024-08-07T18:08:36.7422723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 68%] 2024-08-07T18:08:36.7423993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 68%] 2024-08-07T18:08:36.7425274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 68%] 2024-08-07T18:08:36.7426478Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 68%] 2024-08-07T18:08:36.7427721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 68%] 2024-08-07T18:08:36.7428931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7430223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 68%] 2024-08-07T18:08:36.7431428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 68%] 2024-08-07T18:08:36.7432680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 68%] 2024-08-07T18:08:36.7433976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 68%] 2024-08-07T18:08:36.7435173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 68%] 2024-08-07T18:08:36.7436405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 68%] 2024-08-07T18:08:36.7437635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 68%] 2024-08-07T18:08:36.7438864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 68%] 2024-08-07T18:08:36.7440067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 68%] 2024-08-07T18:08:36.7441298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 68%] 2024-08-07T18:08:36.7442546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0061s] [ 68%] 2024-08-07T18:08:36.7443808Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 68%] 2024-08-07T18:08:36.7445022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 68%] 2024-08-07T18:08:36.7446225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 68%] 2024-08-07T18:08:36.7447465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 68%] 2024-08-07T18:08:36.7448672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 68%] 2024-08-07T18:08:36.7449890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 68%] 2024-08-07T18:08:36.7451140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 68%] 2024-08-07T18:08:36.7452409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 68%] 2024-08-07T18:08:36.7453660Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7454870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 68%] 2024-08-07T18:08:36.7456098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 68%] 2024-08-07T18:08:36.7457329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 68%] 2024-08-07T18:08:36.7458559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 68%] 2024-08-07T18:08:36.7459765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 68%] 2024-08-07T18:08:36.7461047Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 68%] 2024-08-07T18:08:36.7462307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 68%] 2024-08-07T18:08:36.7463537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 68%] 2024-08-07T18:08:36.7464727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 68%] 2024-08-07T18:08:36.7465928Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 68%] 2024-08-07T18:08:36.7467169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 69%] 2024-08-07T18:08:36.7468374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 69%] 2024-08-07T18:08:36.7469640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 69%] 2024-08-07T18:08:36.7470894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 69%] 2024-08-07T18:08:36.7472116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 69%] 2024-08-07T18:08:36.7473321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 69%] 2024-08-07T18:08:36.7474546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 69%] 2024-08-07T18:08:36.7475765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 69%] 2024-08-07T18:08:36.7476984Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 69%] 2024-08-07T18:08:36.7478202Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 69%] 2024-08-07T18:08:36.7479511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 69%] 2024-08-07T18:08:36.7480798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 69%] 2024-08-07T18:08:36.7482004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 69%] 2024-08-07T18:08:36.7483231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 69%] 2024-08-07T18:08:36.7484432Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 69%] 2024-08-07T18:08:36.7485662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 69%] 2024-08-07T18:08:36.7486862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 69%] 2024-08-07T18:08:36.7488059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 69%] 2024-08-07T18:08:36.7489321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 69%] 2024-08-07T18:08:36.7490582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 69%] 2024-08-07T18:08:36.7491798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 69%] 2024-08-07T18:08:36.7493013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 69%] 2024-08-07T18:08:36.7494235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 69%] 2024-08-07T18:08:36.7495696Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 69%] 2024-08-07T18:08:36.7496937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 69%] 2024-08-07T18:08:36.7498230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 69%] 2024-08-07T18:08:36.7499512Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 69%] 2024-08-07T18:08:36.7500740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 69%] 2024-08-07T18:08:36.7501946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 69%] 2024-08-07T18:08:36.7503192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 69%] 2024-08-07T18:08:36.7504402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 69%] 2024-08-07T18:08:36.7505625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 69%] 2024-08-07T18:08:36.7506819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 69%] 2024-08-07T18:08:36.7508146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 69%] 2024-08-07T18:08:36.7510074Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 69%] 2024-08-07T18:08:36.7511280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 69%] 2024-08-07T18:08:36.7512498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 69%] 2024-08-07T18:08:36.7513712Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 69%] 2024-08-07T18:08:36.7514940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 69%] 2024-08-07T18:08:36.7516144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 69%] 2024-08-07T18:08:36.7517416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 69%] 2024-08-07T18:08:36.7518700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 69%] 2024-08-07T18:08:36.7519904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 69%] 2024-08-07T18:08:36.7521134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 69%] 2024-08-07T18:08:36.7522386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 69%] 2024-08-07T18:08:36.7523631Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 69%] 2024-08-07T18:08:36.7524825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 69%] 2024-08-07T18:08:36.7526042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 69%] 2024-08-07T18:08:36.7527277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 69%] 2024-08-07T18:08:36.7528575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 69%] 2024-08-07T18:08:36.7529761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 69%] 2024-08-07T18:08:36.7530960Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 69%] 2024-08-07T18:08:36.7532177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 69%] 2024-08-07T18:08:36.7533387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_1_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 69%] 2024-08-07T18:08:36.7534633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0110s] [ 69%] 2024-08-07T18:08:36.7535864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 69%] 2024-08-07T18:08:36.7537152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 69%] 2024-08-07T18:08:36.7538474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 69%] 2024-08-07T18:08:36.7539718Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0140s] [ 69%] 2024-08-07T18:08:36.7540960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0138s] [ 69%] 2024-08-07T18:08:36.7542195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0092s] [ 69%] 2024-08-07T18:08:36.7543444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 69%] 2024-08-07T18:08:36.7544655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0108s] [ 69%] 2024-08-07T18:08:36.7545941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 69%] 2024-08-07T18:08:36.7547212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0086s] [ 69%] 
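[editor's orientation sketch, not part of the log] Each test_mem_efficient_attention_vs_math_ref_grads case above runs torch.nn.functional.scaled_dot_product_attention once under the EFFICIENT_ATTENTION backend and once under the MATH reference backend, then compares outputs and input gradients. A minimal sketch of that check follows; the helper name, tensor shapes, and tolerances are assumptions for illustration (dropout is fixed at 0.0 here, since comparing the dropout_p_0_22 cases requires reproducing the kernel's RNG mask), not the actual test body:

import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

def check_mem_efficient_vs_math(batch_size=8, seq_len_q=128, seq_len_k=256,
                                head_dim=64, is_causal=False,
                                dtype=torch.float16):
    # One (batch, heads, seq, head_dim) CUDA tensor per input.
    def make(seq_len):
        return torch.rand(batch_size, 1, seq_len, head_dim, device="cuda",
                          dtype=dtype, requires_grad=True)

    q, k, v = make(seq_len_q), make(seq_len_k), make(seq_len_k)
    q_ref, k_ref, v_ref = (t.detach().clone().requires_grad_(True)
                           for t in (q, k, v))

    # Forward through the mem-efficient kernel and the math reference.
    with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
        out = F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
    with sdpa_kernel(SDPBackend.MATH):
        out_ref = F.scaled_dot_product_attention(q_ref, k_ref, v_ref,
                                                 is_causal=is_causal)

    out.sum().backward()
    out_ref.sum().backward()

    # Illustrative tolerances only; the real test derives per-dtype bounds.
    torch.testing.assert_close(out, out_ref, atol=2e-3, rtol=2e-3)
    for got, ref in ((q.grad, q_ref.grad), (k.grad, k_ref.grad),
                     (v.grad, v_ref.grad)):
        torch.testing.assert_close(got, ref, atol=2e-3, rtol=2e-3)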
2024-08-07T18:08:36.7548479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 69%] 2024-08-07T18:08:36.7549697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0129s] [ 69%] 2024-08-07T18:08:36.7550943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0128s] [ 69%] 2024-08-07T18:08:36.7552168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 69%] 2024-08-07T18:08:36.7553406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 69%] 2024-08-07T18:08:36.7554627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0121s] [ 69%] 2024-08-07T18:08:36.7555906Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0125s] [ 69%] 2024-08-07T18:08:36.7557199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0092s] [ 69%] 2024-08-07T18:08:36.7558452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 69%] 2024-08-07T18:08:36.7559704Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0148s] [ 69%] 2024-08-07T18:08:36.7560942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0150s] [ 69%] 2024-08-07T18:08:36.7562194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0105s] [ 69%] 2024-08-07T18:08:36.7563422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 70%] 2024-08-07T18:08:36.7564698Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0119s] [ 70%] 2024-08-07T18:08:36.7565971Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0120s] [ 70%] 2024-08-07T18:08:36.7567188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0095s] [ 70%] 2024-08-07T18:08:36.7568446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 70%] 2024-08-07T18:08:36.7569672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0141s] [ 70%] 2024-08-07T18:08:36.7570922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0143s] [ 70%] 2024-08-07T18:08:36.7572137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0107s] [ 70%] 2024-08-07T18:08:36.7573375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0107s] [ 70%] 2024-08-07T18:08:36.7574655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0140s] [ 70%] 2024-08-07T18:08:36.7575966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0144s] [ 70%] 2024-08-07T18:08:36.7577181Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0108s] [ 70%] 2024-08-07T18:08:36.7578428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 70%] 2024-08-07T18:08:36.7579681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0168s] [ 70%] 2024-08-07T18:08:36.7580907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0168s] [ 70%] 
2024-08-07T18:08:36.7582145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0118s] [ 70%] 2024-08-07T18:08:36.7583424Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0119s] [ 70%] 2024-08-07T18:08:36.7584713Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0136s] [ 70%] 2024-08-07T18:08:36.7585931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0137s] [ 70%] 2024-08-07T18:08:36.7587162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0109s] [ 70%] 2024-08-07T18:08:36.7588409Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 70%] 2024-08-07T18:08:36.7589635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0158s] [ 70%] 2024-08-07T18:08:36.7590878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0157s] [ 70%] 2024-08-07T18:08:36.7592146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0119s] [ 70%] 2024-08-07T18:08:36.7593448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0119s] [ 70%] 2024-08-07T18:08:36.7594659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0106s] [ 70%] 2024-08-07T18:08:36.7596169Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 70%] 2024-08-07T18:08:36.7597414Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 70%] 2024-08-07T18:08:36.7598685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 
PASSED [0.0080s] [ 70%] 2024-08-07T18:08:36.7599914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0133s] [ 70%] 2024-08-07T18:08:36.7601143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 70%] 2024-08-07T18:08:36.7602461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 70%] 2024-08-07T18:08:36.7603768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 70%] 2024-08-07T18:08:36.7604998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0101s] [ 70%] 2024-08-07T18:08:36.7606216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0101s] [ 70%] 2024-08-07T18:08:36.7607451Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 70%] 2024-08-07T18:08:36.7608702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 70%] 2024-08-07T18:08:36.7609937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0123s] [ 70%] 2024-08-07T18:08:36.7611222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0121s] [ 70%] 2024-08-07T18:08:36.7612522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 70%] 2024-08-07T18:08:36.7613756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 70%] 2024-08-07T18:08:36.7614980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0174s] [ 70%] 2024-08-07T18:08:36.7616234Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0182s] [ 70%] 2024-08-07T18:08:36.7617467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0116s] [ 70%] 2024-08-07T18:08:36.7618731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0118s] [ 70%] 2024-08-07T18:08:36.7619954Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0230s] [ 70%] 2024-08-07T18:08:36.7621249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0232s] [ 70%] 2024-08-07T18:08:36.7622570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0134s] [ 70%] 2024-08-07T18:08:36.7623802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0123s] [ 70%] 2024-08-07T18:08:36.7625037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0142s] [ 70%] 2024-08-07T18:08:36.7626273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0142s] [ 70%] 2024-08-07T18:08:36.7627513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0116s] [ 70%] 2024-08-07T18:08:36.7628749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0118s] [ 70%] 2024-08-07T18:08:36.7630031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0172s] [ 70%] 2024-08-07T18:08:36.7631307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0172s] [ 70%] 2024-08-07T18:08:36.7632544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0130s] [ 70%] 
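[editor's orientation sketch, not part of the log] The long test ids in this shard come from a full cross product over the attention parameters encoded in each name. A hedged reconstruction of that grid follows; the parameter values are inferred from the ids visible in this log, not taken from the test source:

import itertools

grid = {
    "batch_size": [1, 8],
    "seq_len_q": [8, 128],
    "seq_len_k": [8, 64, 128, 256, 512],
    "head_dim": [8, 16, 32, 64],
    "is_causal": [False, True],
    "dropout_p": [0.0, 0.22],
    "dtype": ["float16", "float32"],
    "scale": ["scale0", "scale_l1"],
}
for combo in itertools.product(*grid.values()):
    # Mirrors ids like ..._batch_size_8_seq_len_q_128_seq_len_k_256_
    # head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_...
    test_id = "_".join(f"{name}_{value}"
                       for name, value in zip(grid, combo)).replace(".", "_")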
2024-08-07T18:08:36.7633765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0135s] [ 70%] 2024-08-07T18:08:36.7634996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0188s] [ 70%] 2024-08-07T18:08:36.7636254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0195s] [ 70%] 2024-08-07T18:08:36.7637478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0131s] [ 70%] 2024-08-07T18:08:36.7638749Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0127s] [ 70%] 2024-08-07T18:08:36.7640019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0242s] [ 70%] 2024-08-07T18:08:36.7641323Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0246s] [ 70%] 2024-08-07T18:08:36.7642546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0152s] [ 70%] 2024-08-07T18:08:36.7643791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0149s] [ 70%] 2024-08-07T18:08:36.7645014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0158s] [ 70%] 2024-08-07T18:08:36.7646242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0154s] [ 70%] 2024-08-07T18:08:36.7647473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0130s] [ 70%] 2024-08-07T18:08:36.7648772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0132s] [ 70%] 2024-08-07T18:08:36.7650067Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0185s] [ 70%] 2024-08-07T18:08:36.7651292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0184s] [ 70%] 2024-08-07T18:08:36.7652529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0131s] [ 70%] 2024-08-07T18:08:36.7653757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0149s] [ 70%] 2024-08-07T18:08:36.7655002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0220s] [ 70%] 2024-08-07T18:08:36.7656230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0228s] [ 70%] 2024-08-07T18:08:36.7657447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0154s] [ 70%] 2024-08-07T18:08:36.7658759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0154s] [ 70%] 2024-08-07T18:08:36.7660037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0272s] [ 70%] 2024-08-07T18:08:36.7661291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0273s] [ 71%] 2024-08-07T18:08:36.7662510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0171s] [ 71%] 2024-08-07T18:08:36.7663764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0171s] [ 71%] 2024-08-07T18:08:36.7664986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0180s] [ 71%] 2024-08-07T18:08:36.7666223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0182s] [ 71%]
2024-08-07T18:08:36.7667481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0153s] [ 71%] 2024-08-07T18:08:36.7668780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0153s] [ 71%] 2024-08-07T18:08:36.7670017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0209s] [ 71%] 2024-08-07T18:08:36.7671239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0210s] [ 71%] 2024-08-07T18:08:36.7672546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0170s] [ 71%] 2024-08-07T18:08:36.7673779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0170s] [ 71%] 2024-08-07T18:08:36.7675018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0166s] [ 71%] 2024-08-07T18:08:36.7676239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0169s] [ 71%] 2024-08-07T18:08:36.7677520Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0110s] [ 71%] 2024-08-07T18:08:36.7678814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 71%] 2024-08-07T18:08:36.7680037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0220s] [ 71%] 2024-08-07T18:08:36.7681281Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0222s] [ 71%] 2024-08-07T18:08:36.7682516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0126s] [ 71%] 2024-08-07T18:08:36.7683789Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0128s] [ 71%] 2024-08-07T18:08:36.7685000Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0136s] [ 71%] 2024-08-07T18:08:36.7686285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0138s] [ 71%] 2024-08-07T18:08:36.7687600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0115s] [ 71%] 2024-08-07T18:08:36.7688856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 71%] 2024-08-07T18:08:36.7690068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0167s] [ 71%] 2024-08-07T18:08:36.7691316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0164s] [ 71%] 2024-08-07T18:08:36.7692543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0130s] [ 71%] 2024-08-07T18:08:36.7693766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0130s] [ 71%] 2024-08-07T18:08:36.7694994Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 71%] 2024-08-07T18:08:36.7696517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 71%] 2024-08-07T18:08:36.7697819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 71%] 2024-08-07T18:08:36.7699054Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 71%] 2024-08-07T18:08:36.7700295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 71%] 
2024-08-07T18:08:36.7701516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 71%] 2024-08-07T18:08:36.7702765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 71%] 2024-08-07T18:08:36.7703989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 71%] 2024-08-07T18:08:36.7705259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 71%] 2024-08-07T18:08:36.7706553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 71%] 2024-08-07T18:08:36.7707761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 71%] 2024-08-07T18:08:36.7709014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 71%] 2024-08-07T18:08:36.7710234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 71%] 2024-08-07T18:08:36.7711479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 71%] 2024-08-07T18:08:36.7712693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 71%] 2024-08-07T18:08:36.7713925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 71%] 2024-08-07T18:08:36.7715191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0070s] [ 71%] 2024-08-07T18:08:36.7716485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 71%] 2024-08-07T18:08:36.7717720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 71%] 
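The records above are all expansions of a single parametrized pytest case: for each combination of batch_size, seq_len_q, seq_len_k, head_dim, is_causal, dropout_p, dtype, and softmax scale (scale0 appears to select the default 1/sqrt(head_dim), scale_l1 an alternative scale), the memory-efficient CUDA SDPA backend is compared against the math reference implementation. A minimal sketch of the forward half of that check, assuming PyTorch >= 2.3 for torch.nn.attention.sdpa_kernel; the head count, helper name, and tolerances here are illustrative, not the suite's actual values:

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

def check_mem_efficient_vs_math(batch_size=8, seq_len_q=128, seq_len_k=256,
                                head_dim=16, is_causal=False,
                                dtype=torch.float16, scale=None):
    # One (batch, heads, seq, head_dim) Q/K/V triple per parametrized case.
    num_heads = 4  # illustrative; the real suite fixes its own head count
    make = lambda s: torch.randn(batch_size, num_heads, s, head_dim,
                                 device="cuda", dtype=dtype)
    q, k, v = make(seq_len_q), make(seq_len_k), make(seq_len_k)

    # Pin the memory-efficient kernel for the tested output...
    with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
        out = F.scaled_dot_product_attention(q, k, v, is_causal=is_causal,
                                             scale=scale)
    # ...and the math backend, run in fp32, for the reference.
    with sdpa_kernel(SDPBackend.MATH):
        ref = F.scaled_dot_product_attention(q.float(), k.float(), v.float(),
                                             is_causal=is_causal, scale=scale)
    torch.testing.assert_close(out.float(), ref, atol=2e-3, rtol=2e-3)
```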
2024-08-07T18:08:36.7718966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 71%] 2024-08-07T18:08:36.7720210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 71%] 2024-08-07T18:08:36.7721444Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 71%] 2024-08-07T18:08:36.7722724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 71%] 2024-08-07T18:08:36.7724006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 71%] 2024-08-07T18:08:36.7725291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0073s] [ 71%] 2024-08-07T18:08:36.7726509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 71%] 2024-08-07T18:08:36.7727716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 71%] 2024-08-07T18:08:36.7728979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 71%] 2024-08-07T18:08:36.7730205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 71%] 2024-08-07T18:08:36.7731445Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 71%] 2024-08-07T18:08:36.7732658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 71%] 2024-08-07T18:08:36.7733947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 71%] 2024-08-07T18:08:36.7735216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 71%]
2024-08-07T18:08:36.7736458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 71%] 2024-08-07T18:08:36.7737676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 71%] 2024-08-07T18:08:36.7738929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 71%] 2024-08-07T18:08:36.7740189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0088s] [ 71%] 2024-08-07T18:08:36.7741417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 71%] 2024-08-07T18:08:36.7742694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 71%] 2024-08-07T18:08:36.7743976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 71%] 2024-08-07T18:08:36.7745201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0084s] [ 71%] 2024-08-07T18:08:36.7746429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 71%] 2024-08-07T18:08:36.7747663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 71%] 2024-08-07T18:08:36.7748907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 71%] 2024-08-07T18:08:36.7750120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0095s] [ 71%] 2024-08-07T18:08:36.7751358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0092s] [ 71%] 2024-08-07T18:08:36.7752627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 71%]
2024-08-07T18:08:36.7753970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 71%] 2024-08-07T18:08:36.7755191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 71%] 2024-08-07T18:08:36.7756428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 71%] 2024-08-07T18:08:36.7757644Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 72%] 2024-08-07T18:08:36.7758923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 72%] 2024-08-07T18:08:36.7760129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0076s] [ 72%] 2024-08-07T18:08:36.7761360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 72%] 2024-08-07T18:08:36.7762653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 72%] 2024-08-07T18:08:36.7763927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 72%] 2024-08-07T18:08:36.7765156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 72%] 2024-08-07T18:08:36.7766369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 72%] 2024-08-07T18:08:36.7767606Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 72%] 2024-08-07T18:08:36.7768846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 72%] 2024-08-07T18:08:36.7770075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 72%]
2024-08-07T18:08:36.7771341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 72%] 2024-08-07T18:08:36.7772655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 72%] 2024-08-07T18:08:36.7773895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 72%] 2024-08-07T18:08:36.7775122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0310s] [ 72%] 2024-08-07T18:08:36.7776380Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0326s] [ 72%] 2024-08-07T18:08:36.7777613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0188s] [ 72%] 2024-08-07T18:08:36.7778871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0191s] [ 72%] 2024-08-07T18:08:36.7780098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0415s] [ 72%] 2024-08-07T18:08:36.7781405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0426s] [ 72%] 2024-08-07T18:08:36.7782691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0228s] [ 72%] 2024-08-07T18:08:36.7783922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0221s] [ 72%] 2024-08-07T18:08:36.7785166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0217s] [ 72%] 2024-08-07T18:08:36.7786398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0218s] [ 72%] 2024-08-07T18:08:36.7787635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0189s] [ 72%]
2024-08-07T18:08:36.7788874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0190s] [ 72%] 2024-08-07T18:08:36.7790178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0261s] [ 72%] 2024-08-07T18:08:36.7791459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0261s] [ 72%] 2024-08-07T18:08:36.7792702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0223s] [ 72%] 2024-08-07T18:08:36.7793930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0224s] [ 72%] 2024-08-07T18:08:36.7795428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0334s] [ 72%] 2024-08-07T18:08:36.7796706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0349s] [ 72%] 2024-08-07T18:08:36.7797927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0217s] [ 72%] 2024-08-07T18:08:36.7799194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0215s] [ 72%] 2024-08-07T18:08:36.7800502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0439s] [ 72%] 2024-08-07T18:08:36.7801828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0452s] [ 72%] 2024-08-07T18:08:36.7803057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0253s] [ 72%] 2024-08-07T18:08:36.7804317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0253s] [ 72%] 2024-08-07T18:08:36.7805541Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0236s] [ 72%] 2024-08-07T18:08:36.7806768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0238s] [ 72%] 2024-08-07T18:08:36.7808001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0214s] [ 72%] 2024-08-07T18:08:36.7809311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0211s] [ 72%] 2024-08-07T18:08:36.7810624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0287s] [ 72%] 2024-08-07T18:08:36.7811847Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0288s] [ 72%] 2024-08-07T18:08:36.7813088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0249s] [ 72%] 2024-08-07T18:08:36.7814322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0247s] [ 72%] 2024-08-07T18:08:36.7815577Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0394s] [ 72%] 2024-08-07T18:08:36.7816809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0412s] [ 72%] 2024-08-07T18:08:36.7818035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0261s] [ 72%] 2024-08-07T18:08:36.7819353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0259s] [ 72%] 2024-08-07T18:08:36.7820638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0500s] [ 72%] 2024-08-07T18:08:36.7821891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0507s] [ 72%] 
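The _ref_grads suffix in these test names means each case also backpropagates through both backends and compares the resulting input gradients, with looser tolerances for the float16 runs than for float32. A sketch of that half of the check, under the same assumptions as the forward sketch above (PyTorch >= 2.3; the tolerance values are illustrative, not the suite's):

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

def compare_grads(q, k, v, is_causal=False, scale=None):
    # Independent leaf copies for the tested path and the fp32 reference path.
    qkv = [t.detach().clone().requires_grad_(True) for t in (q, k, v)]
    qkv_ref = [t.detach().float().clone().requires_grad_(True) for t in (q, k, v)]

    with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
        out = F.scaled_dot_product_attention(*qkv, is_causal=is_causal, scale=scale)
    with sdpa_kernel(SDPBackend.MATH):
        ref = F.scaled_dot_product_attention(*qkv_ref, is_causal=is_causal, scale=scale)

    upstream = torch.randn_like(ref)   # one shared dL/d(out) for both paths
    out.backward(upstream.to(out.dtype))
    ref.backward(upstream)

    for got, want in zip(qkv, qkv_ref):
        # fp16 grads need looser tolerances against the fp32 math reference
        tol = 5e-3 if got.dtype == torch.float16 else 1e-5
        torch.testing.assert_close(got.grad.float(), want.grad, atol=tol, rtol=tol)
```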
2024-08-07T18:08:36.7823166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0300s] [ 72%] 2024-08-07T18:08:36.7824434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0294s] [ 72%] 2024-08-07T18:08:36.7825665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0283s] [ 72%] 2024-08-07T18:08:36.7826894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0285s] [ 72%] 2024-08-07T18:08:36.7828162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0252s] [ 72%] 2024-08-07T18:08:36.7829465Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0251s] [ 72%] 2024-08-07T18:08:36.7830727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0330s] [ 72%] 2024-08-07T18:08:36.7831941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0329s] [ 72%] 2024-08-07T18:08:36.7833186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0292s] [ 72%] 2024-08-07T18:08:36.7834419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0287s] [ 72%] 2024-08-07T18:08:36.7835659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0289s] [ 72%] 2024-08-07T18:08:36.7836940Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0303s] [ 72%] 2024-08-07T18:08:36.7838235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0182s] [ 72%] 2024-08-07T18:08:36.7839485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0180s] [ 72%]
2024-08-07T18:08:36.7840715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0404s] [ 72%] 2024-08-07T18:08:36.7841975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0409s] [ 72%] 2024-08-07T18:08:36.7843214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0213s] [ 72%] 2024-08-07T18:08:36.7844472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0215s] [ 72%] 2024-08-07T18:08:36.7845688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0207s] [ 72%] 2024-08-07T18:08:36.7846974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0205s] [ 72%] 2024-08-07T18:08:36.7848248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0184s] [ 72%] 2024-08-07T18:08:36.7849511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0186s] [ 72%] 2024-08-07T18:08:36.7850736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0253s] [ 72%] 2024-08-07T18:08:36.7851967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0255s] [ 72%] 2024-08-07T18:08:36.7853212Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0210s] [ 72%] 2024-08-07T18:08:36.7854440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0211s] [ 72%] 2024-08-07T18:08:36.7855724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0073s] [ 73%] 2024-08-07T18:08:36.7857020Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 73%] 2024-08-07T18:08:36.7858260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 73%] 2024-08-07T18:08:36.7859522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 73%] 2024-08-07T18:08:36.7860757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 73%] 2024-08-07T18:08:36.7861998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 73%] 2024-08-07T18:08:36.7863223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 73%] 2024-08-07T18:08:36.7864479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 73%] 2024-08-07T18:08:36.7865732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 73%] 2024-08-07T18:08:36.7867021Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 73%] 2024-08-07T18:08:36.7868234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 73%] 2024-08-07T18:08:36.7869489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 73%] 2024-08-07T18:08:36.7870712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0093s] [ 73%] 2024-08-07T18:08:36.7871963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 73%] 2024-08-07T18:08:36.7873182Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 73%] 
2024-08-07T18:08:36.7874455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 73%] 2024-08-07T18:08:36.7875750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 73%] 2024-08-07T18:08:36.7876975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 73%] 2024-08-07T18:08:36.7878218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 73%] 2024-08-07T18:08:36.7879478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 73%] 2024-08-07T18:08:36.7880724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0096s] [ 73%] 2024-08-07T18:08:36.7881950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 73%] 2024-08-07T18:08:36.7883188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 73%] 2024-08-07T18:08:36.7884459Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 73%] 2024-08-07T18:08:36.7885724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0085s] [ 73%] 2024-08-07T18:08:36.7886961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 73%] 2024-08-07T18:08:36.7888183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 73%] 2024-08-07T18:08:36.7889446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 73%] 2024-08-07T18:08:36.7890672Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 73%]
2024-08-07T18:08:36.7891913Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 73%] 2024-08-07T18:08:36.7893179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 73%] 2024-08-07T18:08:36.7894478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 73%] 2024-08-07T18:08:36.7895945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0091s] [ 73%] 2024-08-07T18:08:36.7897195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 73%] 2024-08-07T18:08:36.7898453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 73%] 2024-08-07T18:08:36.7899689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 73%] 2024-08-07T18:08:36.7900934Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0113s] [ 73%] 2024-08-07T18:08:36.7902163Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0110s] [ 73%] 2024-08-07T18:08:36.7903477Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 73%] 2024-08-07T18:08:36.7904779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 73%] 2024-08-07T18:08:36.7906013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0098s] [ 73%] 2024-08-07T18:08:36.7907262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 73%] 2024-08-07T18:08:36.7908484Z
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 73%] 2024-08-07T18:08:36.7909742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 73%] 2024-08-07T18:08:36.7910967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0112s] [ 73%] 2024-08-07T18:08:36.7912260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0110s] [ 73%] 2024-08-07T18:08:36.7913552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0088s] [ 73%] 2024-08-07T18:08:36.7914796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 73%] 2024-08-07T18:08:36.7916008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0073s] [ 73%] 2024-08-07T18:08:36.7917264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 73%] 2024-08-07T18:08:36.7918480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 73%] 2024-08-07T18:08:36.7919735Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 73%] 2024-08-07T18:08:36.7920956Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0089s] [ 73%] 2024-08-07T18:08:36.7922231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 73%] 2024-08-07T18:08:36.7923569Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 73%] 2024-08-07T18:08:36.7924798Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 73%] 
2024-08-07T18:08:36.7926028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0075s] [ 73%] 2024-08-07T18:08:36.7927260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 73%] 2024-08-07T18:08:36.7928484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 73%] 2024-08-07T18:08:36.7929708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 73%] 2024-08-07T18:08:36.7930943Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0092s] [ 73%] 2024-08-07T18:08:36.7932226Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 73%] 2024-08-07T18:08:36.7933494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 73%] 2024-08-07T18:08:36.7934734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 73%] 2024-08-07T18:08:36.7935962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 73%] 2024-08-07T18:08:36.7937209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 73%] 2024-08-07T18:08:36.7938425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 73%] 2024-08-07T18:08:36.7939667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 73%] 2024-08-07T18:08:36.7940939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 73%] 2024-08-07T18:08:36.7942242Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 73%] 
2024-08-07T18:08:36.7943461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 73%] 2024-08-07T18:08:36.7944689Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 73%] 2024-08-07T18:08:36.7945937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 73%] 2024-08-07T18:08:36.7947161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 73%] 2024-08-07T18:08:36.7948389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 73%] 2024-08-07T18:08:36.7949615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 73%] 2024-08-07T18:08:36.7950900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0085s] [ 73%] 2024-08-07T18:08:36.7952174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 74%] 2024-08-07T18:08:36.7953413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 74%] 2024-08-07T18:08:36.7954638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 74%] 2024-08-07T18:08:36.7955858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0073s] [ 74%] 2024-08-07T18:08:36.7957100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 74%] 2024-08-07T18:08:36.7958317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 74%] 2024-08-07T18:08:36.7959608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 74%] 
2024-08-07T18:08:36.7960885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 74%] 2024-08-07T18:08:36.7962129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 74%] 2024-08-07T18:08:36.7963345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 74%] 2024-08-07T18:08:36.7964595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 74%] 2024-08-07T18:08:36.7965815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 74%] 2024-08-07T18:08:36.7967032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 74%] 2024-08-07T18:08:36.7968262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 74%] 2024-08-07T18:08:36.7969527Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 74%] 2024-08-07T18:08:36.7970826Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 74%] 2024-08-07T18:08:36.7972042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 74%] 2024-08-07T18:08:36.7973277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 74%] 2024-08-07T18:08:36.7974510Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 74%] 2024-08-07T18:08:36.7975743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 74%] 2024-08-07T18:08:36.7976963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 
74%] 2024-08-07T18:08:36.7978224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 74%] 2024-08-07T18:08:36.7979517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 74%] 2024-08-07T18:08:36.7980751Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0097s] [ 74%] 2024-08-07T18:08:36.7982001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 74%] 2024-08-07T18:08:36.7983233Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 74%] 2024-08-07T18:08:36.7984488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 74%] 2024-08-07T18:08:36.7985690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0083s] [ 74%] 2024-08-07T18:08:36.7986916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 74%] 2024-08-07T18:08:36.7988167Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 74%] 2024-08-07T18:08:36.7989434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 74%] 2024-08-07T18:08:36.7990677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0095s] [ 74%] 2024-08-07T18:08:36.7991900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0095s] [ 74%] 2024-08-07T18:08:36.7993141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 74%] 2024-08-07T18:08:36.7994358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] 
[ 74%] 2024-08-07T18:08:36.7995807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 74%] 2024-08-07T18:08:36.7997116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 74%] 2024-08-07T18:08:36.7998431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 74%] 2024-08-07T18:08:36.7999645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 74%] 2024-08-07T18:08:36.8000878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0079s] [ 74%] 2024-08-07T18:08:36.8002120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 74%] 2024-08-07T18:08:36.8003340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 74%] 2024-08-07T18:08:36.8004576Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 74%] 2024-08-07T18:08:36.8005780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 74%] 2024-08-07T18:08:36.8007065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 74%] 2024-08-07T18:08:36.8008340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 74%] 2024-08-07T18:08:36.8009573Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 74%] 2024-08-07T18:08:36.8010791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 74%] 2024-08-07T18:08:36.8012012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 74%] 
2024-08-07T18:08:36.8013241Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 74%] 2024-08-07T18:08:36.8014452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_128_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 74%] 2024-08-07T18:08:36.8015739Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0170s] [ 74%] 2024-08-07T18:08:36.8017019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0174s] [ 74%] 2024-08-07T18:08:36.8018260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0121s] [ 74%] 2024-08-07T18:08:36.8019486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0118s] [ 74%] 2024-08-07T18:08:36.8020756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0228s] [ 74%] 2024-08-07T18:08:36.8021992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0225s] [ 74%] 2024-08-07T18:08:36.8023255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0140s] [ 74%] 2024-08-07T18:08:36.8024504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0135s] [ 74%] 2024-08-07T18:08:36.8025763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0168s] [ 74%] 2024-08-07T18:08:36.8027061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0171s] [ 74%] 2024-08-07T18:08:36.8028292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0124s] [ 74%] 2024-08-07T18:08:36.8029532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 
PASSED [0.0120s] [ 74%] 2024-08-07T18:08:36.8030776Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0226s] [ 74%] 2024-08-07T18:08:36.8032034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0221s] [ 74%] 2024-08-07T18:08:36.8033253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0137s] [ 74%] 2024-08-07T18:08:36.8034517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0138s] [ 74%] 2024-08-07T18:08:36.8035807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0181s] [ 74%] 2024-08-07T18:08:36.8037034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0190s] [ 74%] 2024-08-07T18:08:36.8038277Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0133s] [ 74%] 2024-08-07T18:08:36.8039513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0135s] [ 74%] 2024-08-07T18:08:36.8040780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0237s] [ 74%] 2024-08-07T18:08:36.8042016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0239s] [ 74%] 2024-08-07T18:08:36.8043262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0152s] [ 74%] 2024-08-07T18:08:36.8044539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0156s] [ 74%] 2024-08-07T18:08:36.8045816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0183s] [ 74%] 2024-08-07T18:08:36.8047061Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0191s] [ 74%] 2024-08-07T18:08:36.8048279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0134s] [ 75%] 2024-08-07T18:08:36.8049525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0137s] [ 75%] 2024-08-07T18:08:36.8050767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0236s] [ 75%] 2024-08-07T18:08:36.8052011Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0240s] [ 75%] 2024-08-07T18:08:36.8053326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0157s] [ 75%] 2024-08-07T18:08:36.8054626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0150s] [ 75%] 2024-08-07T18:08:36.8055850Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0217s] [ 75%] 2024-08-07T18:08:36.8057081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0228s] [ 75%] 2024-08-07T18:08:36.8058329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0155s] [ 75%] 2024-08-07T18:08:36.8059570Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0159s] [ 75%] 2024-08-07T18:08:36.8060846Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0270s] [ 75%] 2024-08-07T18:08:36.8062068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0270s] [ 75%] 2024-08-07T18:08:36.8063361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0176s] [ 75%] 
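A note on reading the float16 rows next to the float32 rows: fused-kernel tests generally cannot use one fixed tolerance across dtypes. One common scheme, sketched here as an assumption rather than a description of how test_transformers.py sets its bounds, is to compute a float64 reference and allow the fused kernel an error proportional to the error that plain math in the test dtype already incurs.

    # Hedged sketch of dtype-aware tolerance checking: bound the fused
    # kernel's deviation from a float64 reference by a multiple of the
    # deviation of the same computation done in the low-precision dtype.
    # The fudge factor and floor are illustrative constants.
    import torch

    def fused_close_enough(fused, ref64, lowp_ref, fudge=4.0, floor=1e-7):
        ref_err = (lowp_ref.double() - ref64).abs().max()
        fused_err = (fused.double() - ref64).abs().max()
        return bool(fused_err <= fudge * ref_err + floor)

This explains why a float16 variant can pass with an absolute error that would fail its float32 sibling: the allowance scales with what the dtype itself can represent.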
2024-08-07T18:08:36.8064647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0178s] [ 75%] 2024-08-07T18:08:36.8065881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0215s] [ 75%] 2024-08-07T18:08:36.8067110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0218s] [ 75%] 2024-08-07T18:08:36.8068341Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0162s] [ 75%] 2024-08-07T18:08:36.8069581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0163s] [ 75%] 2024-08-07T18:08:36.8070822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0267s] [ 75%] 2024-08-07T18:08:36.8072161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0268s] [ 75%] 2024-08-07T18:08:36.8073441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0179s] [ 75%] 2024-08-07T18:08:36.8074685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0179s] [ 75%] 2024-08-07T18:08:36.8075901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0160s] [ 75%] 2024-08-07T18:08:36.8077159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0164s] [ 75%] 2024-08-07T18:08:36.8078384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0115s] [ 75%] 2024-08-07T18:08:36.8079611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0114s] [ 75%] 2024-08-07T18:08:36.8080871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 
PASSED [0.0213s] [ 75%] 2024-08-07T18:08:36.8082153Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0218s] [ 75%] 2024-08-07T18:08:36.8083452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0129s] [ 75%] 2024-08-07T18:08:36.8084680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0132s] [ 75%] 2024-08-07T18:08:36.8085918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0161s] [ 75%] 2024-08-07T18:08:36.8087146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0162s] [ 75%] 2024-08-07T18:08:36.8088376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0116s] [ 75%] 2024-08-07T18:08:36.8089595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0116s] [ 75%] 2024-08-07T18:08:36.8090919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0210s] [ 75%] 2024-08-07T18:08:36.8092213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0214s] [ 75%] 2024-08-07T18:08:36.8093427Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0131s] [ 75%] 2024-08-07T18:08:36.8094664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0136s] [ 75%] 2024-08-07T18:08:36.8096135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0288s] [ 75%] 2024-08-07T18:08:36.8097417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0307s] [ 75%] 2024-08-07T18:08:36.8098640Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0183s] [ 75%] 2024-08-07T18:08:36.8099889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0179s] [ 75%] 2024-08-07T18:08:36.8101219Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0393s] [ 75%] 2024-08-07T18:08:36.8102591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0397s] [ 75%] 2024-08-07T18:08:36.8103839Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0210s] [ 75%] 2024-08-07T18:08:36.8105069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0211s] [ 75%] 2024-08-07T18:08:36.8106316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0254s] [ 75%] 2024-08-07T18:08:36.8107543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0261s] [ 75%] 2024-08-07T18:08:36.8108788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0183s] [ 75%] 2024-08-07T18:08:36.8110084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0183s] [ 75%] 2024-08-07T18:08:36.8111408Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0341s] [ 75%] 2024-08-07T18:08:36.8112635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0341s] [ 75%] 2024-08-07T18:08:36.8113859Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0214s] [ 75%] 2024-08-07T18:08:36.8115162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0214s] [ 75%] 
2024-08-07T18:08:36.8116387Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0309s] [ 75%] 2024-08-07T18:08:36.8117635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0322s] [ 75%] 2024-08-07T18:08:36.8118873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0202s] [ 75%] 2024-08-07T18:08:36.8120187Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0202s] [ 75%] 2024-08-07T18:08:36.8121494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0415s] [ 75%] 2024-08-07T18:08:36.8122781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0420s] [ 75%] 2024-08-07T18:08:36.8124022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0237s] [ 75%] 2024-08-07T18:08:36.8125264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0236s] [ 75%] 2024-08-07T18:08:36.8126498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0283s] [ 75%] 2024-08-07T18:08:36.8127722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0285s] [ 75%] 2024-08-07T18:08:36.8129007Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0210s] [ 75%] 2024-08-07T18:08:36.8130288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0203s] [ 75%] 2024-08-07T18:08:36.8131550Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0363s] [ 75%] 2024-08-07T18:08:36.8132774Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0375s] [ 75%] 2024-08-07T18:08:36.8134023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0247s] [ 75%] 2024-08-07T18:08:36.8135284Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0248s] [ 75%] 2024-08-07T18:08:36.8136537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0368s] [ 75%] 2024-08-07T18:08:36.8137759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0389s] [ 75%] 2024-08-07T18:08:36.8139034Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0249s] [ 75%] 2024-08-07T18:08:36.8140349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0249s] [ 75%] 2024-08-07T18:08:36.8141595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0474s] [ 75%] 2024-08-07T18:08:36.8142855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0484s] [ 75%] 2024-08-07T18:08:36.8144095Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0289s] [ 75%] 2024-08-07T18:08:36.8145343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0289s] [ 75%] 2024-08-07T18:08:36.8146559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0341s] [ 76%] 2024-08-07T18:08:36.8147852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0343s] [ 76%] 2024-08-07T18:08:36.8149122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0253s] [ 
76%] 2024-08-07T18:08:36.8150345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0250s] [ 76%] 2024-08-07T18:08:36.8151594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0421s] [ 76%] 2024-08-07T18:08:36.8152827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0408s] [ 76%] 2024-08-07T18:08:36.8154076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0270s] [ 76%] 2024-08-07T18:08:36.8155308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0272s] [ 76%] 2024-08-07T18:08:36.8156551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0264s] [ 76%] 2024-08-07T18:08:36.8157824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0277s] [ 76%] 2024-08-07T18:08:36.8159119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0168s] [ 76%] 2024-08-07T18:08:36.8160344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0168s] [ 76%] 2024-08-07T18:08:36.8161591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0375s] [ 76%] 2024-08-07T18:08:36.8162851Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0380s] [ 76%] 2024-08-07T18:08:36.8164083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0202s] [ 76%] 2024-08-07T18:08:36.8165339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0198s] [ 76%] 2024-08-07T18:08:36.8166604Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0240s] [ 76%] 2024-08-07T18:08:36.8167898Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0242s] [ 76%] 2024-08-07T18:08:36.8169119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0177s] [ 76%] 2024-08-07T18:08:36.8170363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0178s] [ 76%] 2024-08-07T18:08:36.8171649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0324s] [ 76%] 2024-08-07T18:08:36.8172889Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0333s] [ 76%] 2024-08-07T18:08:36.8174127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0206s] [ 76%] 2024-08-07T18:08:36.8175363Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0213s] [ 76%] 2024-08-07T18:08:36.8176648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0086s] [ 76%] 2024-08-07T18:08:36.8177921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 76%] 2024-08-07T18:08:36.8179152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0068s] [ 76%] 2024-08-07T18:08:36.8180376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 76%] 2024-08-07T18:08:36.8181635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 76%] 2024-08-07T18:08:36.8182861Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 76%] 
2024-08-07T18:08:36.8184076Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 76%] 2024-08-07T18:08:36.8185369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 76%] 2024-08-07T18:08:36.8186659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 76%] 2024-08-07T18:08:36.8187878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 76%] 2024-08-07T18:08:36.8189083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 76%] 2024-08-07T18:08:36.8190317Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 76%] 2024-08-07T18:08:36.8191597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0109s] [ 76%] 2024-08-07T18:08:36.8192837Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 76%] 2024-08-07T18:08:36.8194045Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 76%] 2024-08-07T18:08:36.8195607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 76%] 2024-08-07T18:08:36.8196970Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0091s] [ 76%] 2024-08-07T18:08:36.8198194Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 76%] 2024-08-07T18:08:36.8199439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 76%] 2024-08-07T18:08:36.8200667Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 76%] 
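The case names above are generated pytest IDs, so the full parameter tuple can be recovered mechanically from any line. A throwaway helper for doing that, illustrative only and not part of the suite:

    # Parse the parameter tuple out of one of the generated case names
    # seen in this shard. Field names in the returned dict are my own.
    import re

    PATTERN = re.compile(
        r"batch_size_(?P<batch>\d+)_seq_len_q_(?P<seq_q>\d+)_seq_len_k_(?P<seq_k>\d+)"
        r"_head_dim_(?P<head_dim>\d+)_is_causal_(?P<causal>True|False)"
        r"_dropout_p_(?P<dropout>\d+_\d+)_(?P<dtype>float16|float32)_(?P<scale>scale0|scale_l1)"
    )

    def decode(case_name):
        m = PATTERN.search(case_name)
        if m is None:
            raise ValueError("unrecognized case name: " + case_name)
        d = m.groupdict()
        return {
            "batch_size": int(d["batch"]),
            "seq_len_q": int(d["seq_q"]),
            "seq_len_k": int(d["seq_k"]),
            "head_dim": int(d["head_dim"]),
            "is_causal": d["causal"] == "True",
            "dropout_p": float(d["dropout"].replace("_", ".")),  # 0_22 -> 0.22
            "dtype": d["dtype"],
            "scale": d["scale"],
        }

    print(decode("test_mem_efficient_attention_vs_math_ref_grads_"
                 "batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_"
                 "is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16"))

Decoded this way, the rows around this point of the log are the seq_len_q_256 / seq_len_k_4 corner of the grid, i.e. the highly rectangular attention shapes.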
2024-08-07T18:08:36.8201923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0112s] [ 76%] 2024-08-07T18:08:36.8203145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 76%] 2024-08-07T18:08:36.8204448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 76%] 2024-08-07T18:08:36.8205741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 76%] 2024-08-07T18:08:36.8206946Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0093s] [ 76%] 2024-08-07T18:08:36.8208176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0093s] [ 76%] 2024-08-07T18:08:36.8209389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 76%] 2024-08-07T18:08:36.8210635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 76%] 2024-08-07T18:08:36.8211869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0116s] [ 76%] 2024-08-07T18:08:36.8213133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 76%] 2024-08-07T18:08:36.8214406Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0079s] [ 76%] 2024-08-07T18:08:36.8215693Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 76%] 2024-08-07T18:08:36.8216902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0105s] [ 76%] 2024-08-07T18:08:36.8218123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 
76%] 2024-08-07T18:08:36.8219361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 76%] 2024-08-07T18:08:36.8220579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 76%] 2024-08-07T18:08:36.8221841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0123s] [ 76%] 2024-08-07T18:08:36.8223152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0123s] [ 76%] 2024-08-07T18:08:36.8224443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0085s] [ 76%] 2024-08-07T18:08:36.8225662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 76%] 2024-08-07T18:08:36.8226886Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0108s] [ 76%] 2024-08-07T18:08:36.8228099Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 76%] 2024-08-07T18:08:36.8229308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0082s] [ 76%] 2024-08-07T18:08:36.8230537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 76%] 2024-08-07T18:08:36.8231768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0124s] [ 76%] 2024-08-07T18:08:36.8233048Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 76%] 2024-08-07T18:08:36.8234310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0085s] [ 76%] 2024-08-07T18:08:36.8235547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] 
[ 76%] 2024-08-07T18:08:36.8236765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 76%] 2024-08-07T18:08:36.8238003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 76%] 2024-08-07T18:08:36.8239213Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 76%] 2024-08-07T18:08:36.8240435Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 76%] 2024-08-07T18:08:36.8241727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 76%] 2024-08-07T18:08:36.8243001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 77%] 2024-08-07T18:08:36.8244230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 77%] 2024-08-07T18:08:36.8245450Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 77%] 2024-08-07T18:08:36.8246680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0084s] [ 77%] 2024-08-07T18:08:36.8247894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0083s] [ 77%] 2024-08-07T18:08:36.8249115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 77%] 2024-08-07T18:08:36.8250324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 77%] 2024-08-07T18:08:36.8251598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0107s] [ 77%] 2024-08-07T18:08:36.8252884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 77%] 
2024-08-07T18:08:36.8254091Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 77%] 2024-08-07T18:08:36.8255322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 77%] 2024-08-07T18:08:36.8256548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0545s] [ 77%] 2024-08-07T18:08:36.8257807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0580s] [ 77%] 2024-08-07T18:08:36.8259031Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0332s] [ 77%] 2024-08-07T18:08:36.8260322Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0326s] [ 77%] 2024-08-07T18:08:36.8261626Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0757s] [ 77%] 2024-08-07T18:08:36.8262863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0778s] [ 77%] 2024-08-07T18:08:36.8264105Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0397s] [ 77%] 2024-08-07T18:08:36.8265337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.4338s] [ 77%] 2024-08-07T18:08:36.8266579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0412s] [ 77%] 2024-08-07T18:08:36.8267799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0420s] [ 77%] 2024-08-07T18:08:36.8269030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0346s] [ 77%] 2024-08-07T18:08:36.8270291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 
PASSED [0.0342s] [ 77%] 2024-08-07T18:08:36.8271608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0533s] [ 77%] 2024-08-07T18:08:36.8272835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0538s] [ 77%] 2024-08-07T18:08:36.8274061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0404s] [ 77%] 2024-08-07T18:08:36.8275309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0406s] [ 77%] 2024-08-07T18:08:36.8276537Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0589s] [ 77%] 2024-08-07T18:08:36.8277787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0611s] [ 77%] 2024-08-07T18:08:36.8279055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0359s] [ 77%] 2024-08-07T18:08:36.8280368Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0359s] [ 77%] 2024-08-07T18:08:36.8281615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0780s] [ 77%] 2024-08-07T18:08:36.8282868Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0799s] [ 77%] 2024-08-07T18:08:36.8284103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0429s] [ 77%] 2024-08-07T18:08:36.8285344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0430s] [ 77%] 2024-08-07T18:08:36.8286581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0432s] [ 77%] 2024-08-07T18:08:36.8287801Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0446s] [ 77%] 2024-08-07T18:08:36.8289079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0362s] [ 77%] 2024-08-07T18:08:36.8290357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0359s] [ 77%] 2024-08-07T18:08:36.8291617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0566s] [ 77%] 2024-08-07T18:08:36.8292848Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0572s] [ 77%] 2024-08-07T18:08:36.8294097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0443s] [ 77%] 2024-08-07T18:08:36.8295585Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.4349s] [ 77%] 2024-08-07T18:08:36.8296849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0703s] [ 77%] 2024-08-07T18:08:36.8298174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0735s] [ 77%] 2024-08-07T18:08:36.8299474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0455s] [ 77%] 2024-08-07T18:08:36.8300723Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0454s] [ 77%] 2024-08-07T18:08:36.8301974Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0907s] [ 77%] 2024-08-07T18:08:36.8303236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0932s] [ 77%] 2024-08-07T18:08:36.8304472Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0528s] [ 77%] 
2024-08-07T18:08:36.8305722Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0525s] [ 77%] 2024-08-07T18:08:36.8306936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0528s] [ 77%] 2024-08-07T18:08:36.8308225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0539s] [ 77%] 2024-08-07T18:08:36.8309531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0448s] [ 77%] 2024-08-07T18:08:36.8310755Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0441s] [ 77%] 2024-08-07T18:08:36.8312024Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0649s] [ 77%] 2024-08-07T18:08:36.8313260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0653s] [ 77%] 2024-08-07T18:08:36.8314505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0514s] [ 77%] 2024-08-07T18:08:36.8315733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0515s] [ 77%] 2024-08-07T18:08:36.8317017Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0531s] [ 77%] 2024-08-07T18:08:36.8318301Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0554s] [ 77%] 2024-08-07T18:08:36.8319526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0322s] [ 77%] 2024-08-07T18:08:36.8320773Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0319s] [ 77%] 2024-08-07T18:08:36.8322030Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 
PASSED [0.4649s] [ 77%] 2024-08-07T18:08:36.8323328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0756s] [ 77%] 2024-08-07T18:08:36.8324560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0380s] [ 77%] 2024-08-07T18:08:36.8325807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0379s] [ 77%] 2024-08-07T18:08:36.8327073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0391s] [ 77%] 2024-08-07T18:08:36.8328375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0396s] [ 77%] 2024-08-07T18:08:36.8329586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0323s] [ 77%] 2024-08-07T18:08:36.8330813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0325s] [ 77%] 2024-08-07T18:08:36.8332079Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0509s] [ 77%] 2024-08-07T18:08:36.8333307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0519s] [ 77%] 2024-08-07T18:08:36.8334541Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0395s] [ 77%] 2024-08-07T18:08:36.8335815Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0388s] [ 77%] 2024-08-07T18:08:36.8337111Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 77%] 2024-08-07T18:08:36.8338356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 77%] 2024-08-07T18:08:36.8339593Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0091s] [ 77%] 2024-08-07T18:08:36.8340819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 78%] 2024-08-07T18:08:36.8342068Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0142s] [ 78%] 2024-08-07T18:08:36.8343312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0142s] [ 78%] 2024-08-07T18:08:36.8344529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0102s] [ 78%] 2024-08-07T18:08:36.8345818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 78%] 2024-08-07T18:08:36.8347075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0117s] [ 78%] 2024-08-07T18:08:36.8348309Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0120s] [ 78%] 2024-08-07T18:08:36.8349529Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0094s] [ 78%] 2024-08-07T18:08:36.8350768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 78%] 2024-08-07T18:08:36.8352001Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0143s] [ 78%] 2024-08-07T18:08:36.8353282Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0145s] [ 78%] 2024-08-07T18:08:36.8354587Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0105s] [ 78%] 2024-08-07T18:08:36.8355856Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0106s] [ 78%] 
2024-08-07T18:08:36.8357092Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0121s] [ 78%] 2024-08-07T18:08:36.8358313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 78%] 2024-08-07T18:08:36.8359549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0099s] [ 78%] 2024-08-07T18:08:36.8360782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0096s] [ 78%] 2024-08-07T18:08:36.8362041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0147s] [ 78%] 2024-08-07T18:08:36.8363266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0148s] [ 78%] 2024-08-07T18:08:36.8364552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0101s] [ 78%] 2024-08-07T18:08:36.8365863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0101s] [ 78%] 2024-08-07T18:08:36.8367075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0124s] [ 78%] 2024-08-07T18:08:36.8368313Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0125s] [ 78%] 2024-08-07T18:08:36.8369532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0090s] [ 78%] 2024-08-07T18:08:36.8370767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0095s] [ 78%] 2024-08-07T18:08:36.8371998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0146s] [ 78%] 2024-08-07T18:08:36.8373279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED 
[0.0151s] [ 78%] 2024-08-07T18:08:36.8374544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0108s] [ 78%] 2024-08-07T18:08:36.8375784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 78%] 2024-08-07T18:08:36.8377002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0143s] [ 78%] 2024-08-07T18:08:36.8378228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0144s] [ 78%] 2024-08-07T18:08:36.8379486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0113s] [ 78%] 2024-08-07T18:08:36.8380697Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 78%] 2024-08-07T18:08:36.8381957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0168s] [ 78%] 2024-08-07T18:08:36.8383230Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0167s] [ 78%] 2024-08-07T18:08:36.8384524Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0118s] [ 78%] 2024-08-07T18:08:36.8385752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0117s] [ 78%] 2024-08-07T18:08:36.8386987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0145s] [ 78%] 2024-08-07T18:08:36.8388228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0150s] [ 78%] 2024-08-07T18:08:36.8389438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0103s] [ 78%] 2024-08-07T18:08:36.8390674Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 78%] 2024-08-07T18:08:36.8391957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0170s] [ 78%] 2024-08-07T18:08:36.8393253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0172s] [ 78%] 2024-08-07T18:08:36.8394467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0123s] [ 78%] 2024-08-07T18:08:36.8395955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0116s] [ 78%] 2024-08-07T18:08:36.8397183Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0102s] [ 78%] 2024-08-07T18:08:36.8398431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 78%] 2024-08-07T18:08:36.8399648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 78%] 2024-08-07T18:08:36.8400869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0080s] [ 78%] 2024-08-07T18:08:36.8402199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0132s] [ 78%] 2024-08-07T18:08:36.8403501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0132s] [ 78%] 2024-08-07T18:08:36.8404737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 78%] 2024-08-07T18:08:36.8405965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 78%] 2024-08-07T18:08:36.8407203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0105s] [ 78%] 
2024-08-07T18:08:36.8408431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 78%] 2024-08-07T18:08:36.8409657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 78%] 2024-08-07T18:08:36.8410910Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 78%] 2024-08-07T18:08:36.8412209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0131s] [ 78%] 2024-08-07T18:08:36.8413454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0136s] [ 78%] 2024-08-07T18:08:36.8414650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 78%] 2024-08-07T18:08:36.8415892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 78%] 2024-08-07T18:08:36.8417115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 78%] 2024-08-07T18:08:36.8418350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 78%] 2024-08-07T18:08:36.8419562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0069s] [ 78%] 2024-08-07T18:08:36.8420849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 78%] 2024-08-07T18:08:36.8422149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0106s] [ 78%] 2024-08-07T18:08:36.8423417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 78%] 2024-08-07T18:08:36.8424657Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 
78%] 2024-08-07T18:08:36.8425888Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 78%] 2024-08-07T18:08:36.8427120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0090s] [ 78%] 2024-08-07T18:08:36.8428327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 78%] 2024-08-07T18:08:36.8429596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 78%] 2024-08-07T18:08:36.8430870Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 78%] 2024-08-07T18:08:36.8432096Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0115s] [ 78%] 2024-08-07T18:08:36.8433319Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0113s] [ 78%] 2024-08-07T18:08:36.8434532Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0075s] [ 78%] 2024-08-07T18:08:36.8435788Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 78%] 2024-08-07T18:08:36.8436992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0095s] [ 79%] 2024-08-07T18:08:36.8438228Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 79%] 2024-08-07T18:08:36.8439484Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0076s] [ 79%] 2024-08-07T18:08:36.8440781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 79%] 2024-08-07T18:08:36.8442003Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0114s] [ 
79%] 2024-08-07T18:08:36.8443245Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 79%] 2024-08-07T18:08:36.8444464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 79%] 2024-08-07T18:08:36.8445701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 79%] 2024-08-07T18:08:36.8446924Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0096s] [ 79%] 2024-08-07T18:08:36.8448174Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 79%] 2024-08-07T18:08:36.8449440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 79%] 2024-08-07T18:08:36.8450648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 79%] 2024-08-07T18:08:36.8451901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0118s] [ 79%] 2024-08-07T18:08:36.8453122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 79%] 2024-08-07T18:08:36.8454356Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0087s] [ 79%] 2024-08-07T18:08:36.8455571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 79%] 2024-08-07T18:08:36.8456779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0112s] [ 79%] 2024-08-07T18:08:36.8458061Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0116s] [ 79%] 2024-08-07T18:08:36.8459979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0088s] [ 
79%] 2024-08-07T18:08:36.8461206Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 79%] 2024-08-07T18:08:36.8462425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0132s] [ 79%] 2024-08-07T18:08:36.8463676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0136s] [ 79%] 2024-08-07T18:08:36.8464901Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 79%] 2024-08-07T18:08:36.8466141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 79%] 2024-08-07T18:08:36.8467398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0113s] [ 79%] 2024-08-07T18:08:36.8468670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0120s] [ 79%] 2024-08-07T18:08:36.8469891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0090s] [ 79%] 2024-08-07T18:08:36.8471100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 79%] 2024-08-07T18:08:36.8472331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0134s] [ 79%] 2024-08-07T18:08:36.8473558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0137s] [ 79%] 2024-08-07T18:08:36.8474786Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0095s] [ 79%] 2024-08-07T18:08:36.8475998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 79%] 2024-08-07T18:08:36.8477268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0081s] [ 
79%] 2024-08-07T18:08:36.8478539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 79%] 2024-08-07T18:08:36.8479750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 79%] 2024-08-07T18:08:36.8480982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 79%] 2024-08-07T18:08:36.8482200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0107s] [ 79%] 2024-08-07T18:08:36.8483464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 79%] 2024-08-07T18:08:36.8484676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 79%] 2024-08-07T18:08:36.8485958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 79%] 2024-08-07T18:08:36.8487236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0085s] [ 79%] 2024-08-07T18:08:36.8488461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 79%] 2024-08-07T18:08:36.8489663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 79%] 2024-08-07T18:08:36.8490879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 79%] 2024-08-07T18:08:36.8492119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0107s] [ 79%] 2024-08-07T18:08:36.8493355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0108s] [ 79%] 2024-08-07T18:08:36.8494579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 79%] 
2024-08-07T18:08:36.8496085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_256_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 79%] 2024-08-07T18:08:36.8497419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 79%] 2024-08-07T18:08:36.8498640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 79%] 2024-08-07T18:08:36.8499878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 79%] 2024-08-07T18:08:36.8501110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 79%] 2024-08-07T18:08:36.8502346Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 79%] 2024-08-07T18:08:36.8503611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 79%] 2024-08-07T18:08:36.8504831Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 79%] 2024-08-07T18:08:36.8506140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 79%] 2024-08-07T18:08:36.8507417Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 79%] 2024-08-07T18:08:36.8508651Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 79%] 2024-08-07T18:08:36.8509864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 79%] 2024-08-07T18:08:36.8511103Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 79%] 2024-08-07T18:08:36.8512314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 79%] 
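Note on the cases above: each generated test name encodes one parameter combination — batch_size, seq_len_q, seq_len_k, head_dim, is_causal, dropout_p, dtype, and scale (scale0 vs. scale_l1) — for checking the mem-efficient scaled-dot-product-attention backend against the math reference, gradients included. Below is a minimal sketch of that kind of check, assuming a CUDA build of PyTorch 2.3+ (for torch.nn.attention.sdpa_kernel) and simplifying to a single head and dropout_p=0.0; it is not the suite's implementation, which among other things reconstructs dropout masks so that dropout_p > 0 cases (the _0_22 variants above) stay comparable.

    import torch
    import torch.nn.functional as F
    from torch.nn.attention import SDPBackend, sdpa_kernel

    def efficient_vs_math_ref_grads(batch=8, seq_q=256, seq_k=64, head_dim=16,
                                    is_causal=False, dtype=torch.float16,
                                    scale=None):
        # Illustrative helper (not part of the suite): one
        # (batch, heads=1, seq, head_dim) case mirroring the parameters
        # encoded in the test names; heads=1 and dropout_p=0.0 are
        # simplifications made for this sketch.
        def make(s):
            return torch.rand(batch, 1, s, head_dim, device="cuda",
                              dtype=dtype, requires_grad=True)
        q, k, v = make(seq_q), make(seq_k), make(seq_k)

        # Forward and backward under the mem-efficient kernel.
        with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
            out = F.scaled_dot_product_attention(
                q, k, v, dropout_p=0.0, is_causal=is_causal, scale=scale)
        grads = torch.autograd.grad(out.float().sum(), (q, k, v))

        # Same inputs under the math reference implementation.
        q_r, k_r, v_r = (t.detach().clone().requires_grad_(True)
                         for t in (q, k, v))
        with sdpa_kernel(SDPBackend.MATH):
            out_ref = F.scaled_dot_product_attention(
                q_r, k_r, v_r, dropout_p=0.0, is_causal=is_causal, scale=scale)
        grads_ref = torch.autograd.grad(out_ref.float().sum(), (q_r, k_r, v_r))

        # Low-precision dtypes need looser tolerances, which is why float16
        # and float32 variants of each configuration run side by side above.
        tol = dict(atol=2e-3, rtol=2e-3) if dtype is torch.float16 else {}
        for g, g_ref in zip(grads, grads_ref):
            torch.testing.assert_close(g, g_ref, **tol)

A single configuration from the stream can be rerun in isolation with pytest's -k matching, e.g.: pytest test_transformers.py -v -k "mem_efficient_attention_vs_math_ref_grads and seq_len_q_256 and seq_len_k_64 and head_dim_16 and float16".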
2024-08-07T18:08:36.8513549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 79%] 2024-08-07T18:08:36.8514835Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 79%] 2024-08-07T18:08:36.8516108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 79%] 2024-08-07T18:08:36.8517339Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 79%] 2024-08-07T18:08:36.8518553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 79%] 2024-08-07T18:08:36.8519789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 79%] 2024-08-07T18:08:36.8521018Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 79%] 2024-08-07T18:08:36.8522257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 79%] 2024-08-07T18:08:36.8523544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 79%] 2024-08-07T18:08:36.8524822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 79%] 2024-08-07T18:08:36.8526122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 79%] 2024-08-07T18:08:36.8527329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0065s] [ 79%] 2024-08-07T18:08:36.8528571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 79%] 2024-08-07T18:08:36.8529787Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 
79%] 2024-08-07T18:08:36.8531015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 79%] 2024-08-07T18:08:36.8532227Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 79%] 2024-08-07T18:08:36.8533535Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 80%] 2024-08-07T18:08:36.8534800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 80%] 2024-08-07T18:08:36.8536026Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 80%] 2024-08-07T18:08:36.8537257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0074s] [ 80%] 2024-08-07T18:08:36.8538488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 80%] 2024-08-07T18:08:36.8539731Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 80%] 2024-08-07T18:08:36.8540950Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 80%] 2024-08-07T18:08:36.8542188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0078s] [ 80%] 2024-08-07T18:08:36.8543481Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 80%] 2024-08-07T18:08:36.8544766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 80%] 2024-08-07T18:08:36.8545992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 80%] 2024-08-07T18:08:36.8547203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED 
[0.0066s] [ 80%] 2024-08-07T18:08:36.8548442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 80%] 2024-08-07T18:08:36.8549652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 80%] 2024-08-07T18:08:36.8550881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 80%] 2024-08-07T18:08:36.8552137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 80%] 2024-08-07T18:08:36.8553458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 80%] 2024-08-07T18:08:36.8554662Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 80%] 2024-08-07T18:08:36.8555897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 80%] 2024-08-07T18:08:36.8557110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 80%] 2024-08-07T18:08:36.8558338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 80%] 2024-08-07T18:08:36.8559561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 80%] 2024-08-07T18:08:36.8560777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 80%] 2024-08-07T18:08:36.8562055Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 80%] 2024-08-07T18:08:36.8563369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 80%] 2024-08-07T18:08:36.8564596Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED 
[0.0063s] [ 80%] 2024-08-07T18:08:36.8565816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 80%] 2024-08-07T18:08:36.8567056Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 80%] 2024-08-07T18:08:36.8568265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 80%] 2024-08-07T18:08:36.8569468Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 80%] 2024-08-07T18:08:36.8570743Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 80%] 2024-08-07T18:08:36.8572004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 80%] 2024-08-07T18:08:36.8573261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 80%] 2024-08-07T18:08:36.8574469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 80%] 2024-08-07T18:08:36.8575712Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 80%] 2024-08-07T18:08:36.8576935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 80%] 2024-08-07T18:08:36.8578178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 80%] 2024-08-07T18:08:36.8579389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 80%] 2024-08-07T18:08:36.8580659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 80%] 2024-08-07T18:08:36.8581948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] 
[ 80%] 2024-08-07T18:08:36.8583192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 80%] 2024-08-07T18:08:36.8584429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 80%] 2024-08-07T18:08:36.8585706Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 80%] 2024-08-07T18:08:36.8586947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 80%] 2024-08-07T18:08:36.8588156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 80%] 2024-08-07T18:08:36.8589430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 80%] 2024-08-07T18:08:36.8590703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 80%] 2024-08-07T18:08:36.8591919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 80%] 2024-08-07T18:08:36.8593179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 80%] 2024-08-07T18:08:36.8594404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 80%] 2024-08-07T18:08:36.8595939Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 80%] 2024-08-07T18:08:36.8597168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 80%] 2024-08-07T18:08:36.8598404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 80%] 2024-08-07T18:08:36.8599694Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] 
[ 80%] 2024-08-07T18:08:36.8601009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 80%] 2024-08-07T18:08:36.8602223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 80%] 2024-08-07T18:08:36.8603499Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 80%] 2024-08-07T18:08:36.8604746Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 80%] 2024-08-07T18:08:36.8606019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 80%] 2024-08-07T18:08:36.8607247Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 80%] 2024-08-07T18:08:36.8608531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 80%] 2024-08-07T18:08:36.8609829Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 80%] 2024-08-07T18:08:36.8611038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 80%] 2024-08-07T18:08:36.8612265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 80%] 2024-08-07T18:08:36.8613504Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 80%] 2024-08-07T18:08:36.8614730Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 80%] 2024-08-07T18:08:36.8615967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 80%] 2024-08-07T18:08:36.8617178Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED 
[0.0096s] [ 80%] 2024-08-07T18:08:36.8618466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 80%] 2024-08-07T18:08:36.8619764Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 80%] 2024-08-07T18:08:36.8621015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 80%] 2024-08-07T18:08:36.8622234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0106s] [ 80%] 2024-08-07T18:08:36.8623544Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0104s] [ 80%] 2024-08-07T18:08:36.8624784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 80%] 2024-08-07T18:08:36.8626008Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 80%] 2024-08-07T18:08:36.8627234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0074s] [ 80%] 2024-08-07T18:08:36.8628498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 80%] 2024-08-07T18:08:36.8629781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0071s] [ 80%] 2024-08-07T18:08:36.8630987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 81%] 2024-08-07T18:08:36.8632225Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0082s] [ 81%] 2024-08-07T18:08:36.8633461Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 81%] 2024-08-07T18:08:36.8634690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 
PASSED [0.0077s] [ 81%] 2024-08-07T18:08:36.8635923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0078s] [ 81%] 2024-08-07T18:08:36.8637176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0068s] [ 81%] 2024-08-07T18:08:36.8638469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 81%] 2024-08-07T18:08:36.8639673Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 81%] 2024-08-07T18:08:36.8640909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 81%] 2024-08-07T18:08:36.8642125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 81%] 2024-08-07T18:08:36.8643385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 81%] 2024-08-07T18:08:36.8644594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 81%] 2024-08-07T18:08:36.8645827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 81%] 2024-08-07T18:08:36.8647077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 81%] 2024-08-07T18:08:36.8648336Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 81%] 2024-08-07T18:08:36.8649553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 81%] 2024-08-07T18:08:36.8650769Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 81%] 2024-08-07T18:08:36.8652009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED 
[0.0070s] [ 81%] 2024-08-07T18:08:36.8653262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 81%] 2024-08-07T18:08:36.8654503Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 81%] 2024-08-07T18:08:36.8655759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 81%] 2024-08-07T18:08:36.8657033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 81%] 2024-08-07T18:08:36.8658264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 81%] 2024-08-07T18:08:36.8659470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0053s] [ 81%] 2024-08-07T18:08:36.8660702Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 81%] 2024-08-07T18:08:36.8661920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 81%] 2024-08-07T18:08:36.8663155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 81%] 2024-08-07T18:08:36.8664389Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 81%] 2024-08-07T18:08:36.8665714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 81%] 2024-08-07T18:08:36.8666964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 81%] 2024-08-07T18:08:36.8668190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 81%] 2024-08-07T18:08:36.8669375Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 81%] 
2024-08-07T18:08:36.8670579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 81%]
2024-08-07T18:08:36.8671804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 81%]
2024-08-07T18:08:36.8673012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 81%]
2024-08-07T18:08:36.8674291Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 81%]
2024-08-07T18:08:36.8675551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 81%]
2024-08-07T18:08:36.8676772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 81%]
2024-08-07T18:08:36.8677979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 81%]
2024-08-07T18:08:36.8679205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 81%]
2024-08-07T18:08:36.8680421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0053s] [ 81%]
2024-08-07T18:08:36.8681625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 81%]
2024-08-07T18:08:36.8682852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 81%]
2024-08-07T18:08:36.8684122Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 81%]
2024-08-07T18:08:36.8685404Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 81%]
2024-08-07T18:08:36.8686597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 81%]
2024-08-07T18:08:36.8687811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 81%]
2024-08-07T18:08:36.8689009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 81%]
2024-08-07T18:08:36.8690232Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 81%]
2024-08-07T18:08:36.8691433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 81%]
2024-08-07T18:08:36.8692677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 81%]
2024-08-07T18:08:36.8693966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 81%]
2024-08-07T18:08:36.8696130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 81%]
2024-08-07T18:08:36.8697494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 81%]
2024-08-07T18:08:36.8698720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0054s] [ 81%]
2024-08-07T18:08:36.8699961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 81%]
2024-08-07T18:08:36.8701170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 81%]
2024-08-07T18:08:36.8702396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 81%]
2024-08-07T18:08:36.8703785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 81%]
2024-08-07T18:08:36.8705097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 81%]
2024-08-07T18:08:36.8706330Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 81%]
2024-08-07T18:08:36.8707525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 81%]
2024-08-07T18:08:36.8708748Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 81%]
2024-08-07T18:08:36.8709949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 81%]
2024-08-07T18:08:36.8711171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 81%]
2024-08-07T18:08:36.8712369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 81%]
2024-08-07T18:08:36.8713659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 81%]
2024-08-07T18:08:36.8714953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 81%]
2024-08-07T18:08:36.8716208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 81%]
2024-08-07T18:08:36.8717434Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 81%]
2024-08-07T18:08:36.8718650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 81%]
2024-08-07T18:08:36.8719863Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 81%]
2024-08-07T18:08:36.8721063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 81%]
2024-08-07T18:08:36.8722328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 81%]
2024-08-07T18:08:36.8723594Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 81%]
2024-08-07T18:08:36.8724806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 81%]
2024-08-07T18:08:36.8726065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 81%]
2024-08-07T18:08:36.8727274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 82%]
2024-08-07T18:08:36.8728508Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 82%]
2024-08-07T18:08:36.8729696Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 82%]
2024-08-07T18:08:36.8730951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 82%]
2024-08-07T18:08:36.8732193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 82%]
2024-08-07T18:08:36.8733469Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 82%]
2024-08-07T18:08:36.8734666Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 82%]
2024-08-07T18:08:36.8735874Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 82%]
2024-08-07T18:08:36.8737116Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0108s] [ 82%]
2024-08-07T18:08:36.8738344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 82%]
2024-08-07T18:08:36.8739574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 82%]
2024-08-07T18:08:36.8740836Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 82%]
2024-08-07T18:08:36.8742130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0116s] [ 82%]
2024-08-07T18:08:36.8743351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0117s] [ 82%]
2024-08-07T18:08:36.8744582Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 82%]
2024-08-07T18:08:36.8745824Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 82%]
2024-08-07T18:08:36.8747038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0071s] [ 82%]
2024-08-07T18:08:36.8748269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 82%]
2024-08-07T18:08:36.8749467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 82%]
2024-08-07T18:08:36.8750745Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 82%]
2024-08-07T18:08:36.8752012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 82%]
2024-08-07T18:08:36.8753244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 82%]
2024-08-07T18:08:36.8754446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0074s] [ 82%]
2024-08-07T18:08:36.8755698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 82%]
2024-08-07T18:08:36.8756921Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0121s] [ 82%]
2024-08-07T18:08:36.8758137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0120s] [ 82%]
2024-08-07T18:08:36.8759371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 82%]
2024-08-07T18:08:36.8760640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 82%]
2024-08-07T18:08:36.8761925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0134s] [ 82%]
2024-08-07T18:08:36.8763150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 82%]
2024-08-07T18:08:36.8764388Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 82%]
2024-08-07T18:08:36.8765633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 82%]
2024-08-07T18:08:36.8766855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 82%]
2024-08-07T18:08:36.8768063Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 82%]
2024-08-07T18:08:36.8769328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 82%]
2024-08-07T18:08:36.8770615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 82%]
2024-08-07T18:08:36.8771818Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0086s] [ 82%]
2024-08-07T18:08:36.8773057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 82%]
2024-08-07T18:08:36.8774268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 82%]
2024-08-07T18:08:36.8775522Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 82%]
2024-08-07T18:08:36.8776736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0154s] [ 82%]
2024-08-07T18:08:36.8777972Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0153s] [ 82%]
2024-08-07T18:08:36.8779231Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0094s] [ 82%]
2024-08-07T18:08:36.8780502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0095s] [ 82%]
2024-08-07T18:08:36.8781737Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0167s] [ 82%]
2024-08-07T18:08:36.8782962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0167s] [ 82%]
2024-08-07T18:08:36.8784207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0105s] [ 82%]
2024-08-07T18:08:36.8785443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0104s] [ 82%]
2024-08-07T18:08:36.8786664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0099s] [ 82%]
2024-08-07T18:08:36.8787916Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 82%]
2024-08-07T18:08:36.8789198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0088s] [ 82%]
2024-08-07T18:08:36.8790405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 82%]
2024-08-07T18:08:36.8791639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0108s] [ 82%]
2024-08-07T18:08:36.8792883Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 82%]
2024-08-07T18:08:36.8794100Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0098s] [ 82%]
2024-08-07T18:08:36.8795778Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0100s] [ 82%]
2024-08-07T18:08:36.8797022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0099s] [ 82%]
2024-08-07T18:08:36.8798350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0097s] [ 82%]
2024-08-07T18:08:36.8799636Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 82%]
2024-08-07T18:08:36.8800873Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 82%]
2024-08-07T18:08:36.8802102Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0107s] [ 82%]
2024-08-07T18:08:36.8803329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 82%]
2024-08-07T18:08:36.8804588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 82%]
2024-08-07T18:08:36.8805828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 82%]
2024-08-07T18:08:36.8807109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0066s] [ 82%]
2024-08-07T18:08:36.8808386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 82%]
2024-08-07T18:08:36.8809607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 82%]
2024-08-07T18:08:36.8810814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 82%] 2024-08-07T18:08:36.8812044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0075s] [ 82%] 2024-08-07T18:08:36.8813262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 82%] 2024-08-07T18:08:36.8814470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 82%] 2024-08-07T18:08:36.8815768Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 82%] 2024-08-07T18:08:36.8817025Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 82%] 2024-08-07T18:08:36.8818316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0054s] [ 82%] 2024-08-07T18:08:36.8819523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 82%] 2024-08-07T18:08:36.8820752Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 82%] 2024-08-07T18:08:36.8821967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 82%] 2024-08-07T18:08:36.8823210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 82%] 2024-08-07T18:08:36.8824425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 83%] 2024-08-07T18:08:36.8825714Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 83%] 2024-08-07T18:08:36.8826996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 83%] 
2024-08-07T18:08:36.8828208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 83%] 2024-08-07T18:08:36.8829447Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 83%] 2024-08-07T18:08:36.8830663Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 83%] 2024-08-07T18:08:36.8831894Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 83%] 2024-08-07T18:08:36.8833106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 83%] 2024-08-07T18:08:36.8834328Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8835601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 83%] 2024-08-07T18:08:36.8836884Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8838113Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8839324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 83%] 2024-08-07T18:08:36.8840560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 83%] 2024-08-07T18:08:36.8841790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8843020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 83%] 2024-08-07T18:08:36.8844280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 83%] 
2024-08-07T18:08:36.8845607Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8846810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8848015Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8849238Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8850454Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8851681Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 83%] 2024-08-07T18:08:36.8852892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8854159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 83%] 2024-08-07T18:08:36.8855448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 83%] 2024-08-07T18:08:36.8856670Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 83%] 2024-08-07T18:08:36.8857885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 83%] 2024-08-07T18:08:36.8859098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 83%] 2024-08-07T18:08:36.8860337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 83%] 2024-08-07T18:08:36.8861551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 83%] 
2024-08-07T18:08:36.8862834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8864104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8865358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 83%] 2024-08-07T18:08:36.8866563Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 83%] 2024-08-07T18:08:36.8867792Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 83%] 2024-08-07T18:08:36.8868998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 83%] 2024-08-07T18:08:36.8870203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 83%] 2024-08-07T18:08:36.8871421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 83%] 2024-08-07T18:08:36.8872682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 83%] 2024-08-07T18:08:36.8873960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 83%] 2024-08-07T18:08:36.8875175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 83%] 2024-08-07T18:08:36.8876402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 83%] 2024-08-07T18:08:36.8877616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0054s] [ 83%] 2024-08-07T18:08:36.8878841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 83%] 
2024-08-07T18:08:36.8880042Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 83%] 2024-08-07T18:08:36.8881236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0064s] [ 83%] 2024-08-07T18:08:36.8882511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 83%] 2024-08-07T18:08:36.8883772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 83%] 2024-08-07T18:08:36.8885004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 83%] 2024-08-07T18:08:36.8886204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8887430Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 83%] 2024-08-07T18:08:36.8888616Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8889828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 83%] 2024-08-07T18:08:36.8891071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 83%] 2024-08-07T18:08:36.8892333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 83%] 2024-08-07T18:08:36.8893546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 83%] 2024-08-07T18:08:36.8894750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 83%] 2024-08-07T18:08:36.8896452Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0056s] [ 83%] 
2024-08-07T18:08:36.8897695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 83%] 2024-08-07T18:08:36.8898945Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 83%] 2024-08-07T18:08:36.8900150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 83%] 2024-08-07T18:08:36.8901467Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8902753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8903961Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 83%] 2024-08-07T18:08:36.8905192Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 83%] 2024-08-07T18:08:36.8906412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 83%] 2024-08-07T18:08:36.8907649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 83%] 2024-08-07T18:08:36.8908844Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 83%] 2024-08-07T18:08:36.8910144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 83%] 2024-08-07T18:08:36.8911428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 83%] 2024-08-07T18:08:36.8912652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 83%] 2024-08-07T18:08:36.8913857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 83%] 
2024-08-07T18:08:36.8915073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 83%] 2024-08-07T18:08:36.8916373Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 83%] 2024-08-07T18:08:36.8917595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0055s] [ 83%] 2024-08-07T18:08:36.8918817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 83%] 2024-08-07T18:08:36.8920114Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 84%] 2024-08-07T18:08:36.8921405Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 84%] 2024-08-07T18:08:36.8922639Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 84%] 2024-08-07T18:08:36.8923865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 84%] 2024-08-07T18:08:36.8925084Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 84%] 2024-08-07T18:08:36.8926314Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 84%] 2024-08-07T18:08:36.8927538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 84%] 2024-08-07T18:08:36.8928732Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 84%] 2024-08-07T18:08:36.8929996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 84%] 2024-08-07T18:08:36.8931250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 84%] 
2024-08-07T18:08:36.8932471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 84%] 2024-08-07T18:08:36.8933675Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 84%] 2024-08-07T18:08:36.8934985Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 84%] 2024-08-07T18:08:36.8936214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 84%] 2024-08-07T18:08:36.8937421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 84%] 2024-08-07T18:08:36.8938688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 84%] 2024-08-07T18:08:36.8939963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 84%] 2024-08-07T18:08:36.8941189Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 84%] 2024-08-07T18:08:36.8942402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 84%] 2024-08-07T18:08:36.8943633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 84%] 2024-08-07T18:08:36.8944855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 84%] 2024-08-07T18:08:36.8946090Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 84%] 2024-08-07T18:08:36.8947292Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 84%] 2024-08-07T18:08:36.8948536Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 84%] 
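The case IDs in this run are generated from a parameter grid: each combination of batch size, query/key sequence length, head dim, causality, dropout probability, dtype, and scale becomes its own test. A minimal sketch of such a grid using pytest.mark.parametrize (the parameter names and values below are read off the IDs in this portion of the log and are illustrative only; stock pytest renders bracketed IDs like test_foo[8-4-512], whereas the underscore-suffixed names seen here come from PyTorch's own parametrization helpers):

import itertools

import pytest

# Axes observed in the IDs above (the dtype and scale variants are also
# swept in the real suite but omitted here for brevity).
GRID = list(itertools.product(
    [8],                       # batch_size
    [4, 512],                  # seq_len_q
    [8, 64, 128, 256, 512],    # seq_len_k
    [8, 16, 32, 64],           # head_dim
    [False, True],             # is_causal
    [0.0, 0.22],               # dropout_p
))

@pytest.mark.parametrize(
    "batch_size,seq_len_q,seq_len_k,head_dim,is_causal,dropout_p", GRID)
def test_mem_efficient_attention_vs_math_ref_grads(
        batch_size, seq_len_q, seq_len_k, head_dim, is_causal, dropout_p):
    # Body elided; see the comparison sketch further down in this log.
    ...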
2024-08-07T18:08:36.8949813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 84%] 2024-08-07T18:08:36.8951022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 84%] 2024-08-07T18:08:36.8952249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 84%] 2024-08-07T18:08:36.8953457Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 84%] 2024-08-07T18:08:36.8954690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 84%] 2024-08-07T18:08:36.8955941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 84%] 2024-08-07T18:08:36.8957211Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 84%] 2024-08-07T18:08:36.8958458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0054s] [ 84%] 2024-08-07T18:08:36.8959664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 84%] 2024-08-07T18:08:36.8960890Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 84%] 2024-08-07T18:08:36.8962108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 84%] 2024-08-07T18:08:36.8963343Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0063s] [ 84%] 2024-08-07T18:08:36.8964549Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 84%] 2024-08-07T18:08:36.8965780Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 84%] 2024-08-07T18:08:36.8967021Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 84%] 2024-08-07T18:08:36.8968290Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 84%] 2024-08-07T18:08:36.8969486Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 84%] 2024-08-07T18:08:36.8970698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 84%] 2024-08-07T18:08:36.8971925Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 84%] 2024-08-07T18:08:36.8973131Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 84%] 2024-08-07T18:08:36.8974352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_4_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 84%] 2024-08-07T18:08:36.8975579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0293s] [ 84%] 2024-08-07T18:08:36.8976885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0302s] [ 84%] 2024-08-07T18:08:36.8978173Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0191s] [ 84%] 2024-08-07T18:08:36.8979419Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0189s] [ 84%] 2024-08-07T18:08:36.8980653Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0410s] [ 84%] 2024-08-07T18:08:36.8981895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0414s] [ 84%] 2024-08-07T18:08:36.8983134Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0231s] [ 84%] 2024-08-07T18:08:36.8984359Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0229s] [ 84%] 2024-08-07T18:08:36.8985645Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0296s] [ 84%] 2024-08-07T18:08:36.8986935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0299s] [ 84%] 2024-08-07T18:08:36.8988166Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0198s] [ 84%] 2024-08-07T18:08:36.8989383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0194s] [ 84%] 2024-08-07T18:08:36.8990625Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0407s] [ 84%] 2024-08-07T18:08:36.8991858Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0411s] [ 84%] 2024-08-07T18:08:36.8993081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0237s] [ 84%] 2024-08-07T18:08:36.8994367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0239s] [ 84%] 2024-08-07T18:08:36.8996047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0324s] [ 84%] 2024-08-07T18:08:36.8997348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0337s] [ 84%] 2024-08-07T18:08:36.8998592Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0217s] [ 84%] 2024-08-07T18:08:36.8999854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0212s] [ 84%] 2024-08-07T18:08:36.9001075Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0434s] [ 84%] 
2024-08-07T18:08:36.9002326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0438s] [ 84%] 2024-08-07T18:08:36.9003547Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0254s] [ 84%] 2024-08-07T18:08:36.9004860Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0253s] [ 84%] 2024-08-07T18:08:36.9006171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0322s] [ 84%] 2024-08-07T18:08:36.9007398Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0332s] [ 84%] 2024-08-07T18:08:36.9008634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0223s] [ 84%] 2024-08-07T18:08:36.9009864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0219s] [ 84%] 2024-08-07T18:08:36.9011117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0431s] [ 84%] 2024-08-07T18:08:36.9012342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0435s] [ 84%] 2024-08-07T18:08:36.9013674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0259s] [ 84%] 2024-08-07T18:08:36.9014980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0260s] [ 84%] 2024-08-07T18:08:36.9016255Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0370s] [ 85%] 2024-08-07T18:08:36.9017513Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0396s] [ 85%] 2024-08-07T18:08:36.9018744Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0260s] [ 85%] 2024-08-07T18:08:36.9020002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0258s] [ 85%] 2024-08-07T18:08:36.9021229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0492s] [ 85%] 2024-08-07T18:08:36.9022480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0492s] [ 85%] 2024-08-07T18:08:36.9023750Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0303s] [ 85%] 2024-08-07T18:08:36.9025062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0297s] [ 85%] 2024-08-07T18:08:36.9026275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0377s] [ 85%] 2024-08-07T18:08:36.9027502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0391s] [ 85%] 2024-08-07T18:08:36.9028744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0262s] [ 85%] 2024-08-07T18:08:36.9029976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0260s] [ 85%] 2024-08-07T18:08:36.9031216Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0482s] [ 85%] 2024-08-07T18:08:36.9032490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0487s] [ 85%] 2024-08-07T18:08:36.9033784Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0309s] [ 85%] 2024-08-07T18:08:36.9035013Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0306s] [ 85%] 
2024-08-07T18:08:36.9036249Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0275s] [ 85%] 2024-08-07T18:08:36.9037480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0282s] [ 85%] 2024-08-07T18:08:36.9038711Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0183s] [ 85%] 2024-08-07T18:08:36.9039955Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0181s] [ 85%] 2024-08-07T18:08:36.9041177Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0395s] [ 85%] 2024-08-07T18:08:36.9042473Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0399s] [ 85%] 2024-08-07T18:08:36.9043756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0221s] [ 85%] 2024-08-07T18:08:36.9045002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0222s] [ 85%] 2024-08-07T18:08:36.9046214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0278s] [ 85%] 2024-08-07T18:08:36.9047464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0286s] [ 85%] 2024-08-07T18:08:36.9048688Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0185s] [ 85%] 2024-08-07T18:08:36.9049908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0184s] [ 85%] 2024-08-07T18:08:36.9051208Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0390s] [ 85%] 2024-08-07T18:08:36.9052490Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED 
[0.0399s] [ 85%] 2024-08-07T18:08:36.9053726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0228s] [ 85%] 2024-08-07T18:08:36.9054947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0230s] [ 85%] 2024-08-07T18:08:36.9056197Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.4528s] [ 85%] 2024-08-07T18:08:36.9057441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0566s] [ 85%] 2024-08-07T18:08:36.9058685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0336s] [ 85%] 2024-08-07T18:08:36.9059911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0334s] [ 85%] 2024-08-07T18:08:36.9061179Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0775s] [ 85%] 2024-08-07T18:08:36.9062487Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0784s] [ 85%] 2024-08-07T18:08:36.9063716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0413s] [ 85%] 2024-08-07T18:08:36.9064967Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0415s] [ 85%] 2024-08-07T18:08:36.9066188Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0517s] [ 85%] 2024-08-07T18:08:36.9067437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0537s] [ 85%] 2024-08-07T18:08:36.9068649Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0356s] [ 85%] 2024-08-07T18:08:36.9069933Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0352s] [ 85%] 2024-08-07T18:08:36.9071210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0724s] [ 85%] 2024-08-07T18:08:36.9072433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0737s] [ 85%] 2024-08-07T18:08:36.9073678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0428s] [ 85%] 2024-08-07T18:08:36.9074912Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0422s] [ 85%] 2024-08-07T18:08:36.9076168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0581s] [ 85%] 2024-08-07T18:08:36.9077399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0618s] [ 85%] 2024-08-07T18:08:36.9078634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0380s] [ 85%] 2024-08-07T18:08:36.9079908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0380s] [ 85%] 2024-08-07T18:08:36.9081210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0811s] [ 85%] 2024-08-07T18:08:36.9082437Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0825s] [ 85%] 2024-08-07T18:08:36.9083682Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0460s] [ 85%] 2024-08-07T18:08:36.9084920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0460s] [ 85%] 2024-08-07T18:08:36.9086161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0559s] [ 
85%] 2024-08-07T18:08:36.9087402Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.4444s] [ 85%] 2024-08-07T18:08:36.9088659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0388s] [ 85%] 2024-08-07T18:08:36.9089957Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0387s] [ 85%] 2024-08-07T18:08:36.9091171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0777s] [ 85%] 2024-08-07T18:08:36.9092413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0783s] [ 85%] 2024-08-07T18:08:36.9093635Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0470s] [ 85%] 2024-08-07T18:08:36.9094885Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0467s] [ 85%] 2024-08-07T18:08:36.9096533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0688s] [ 85%] 2024-08-07T18:08:36.9097767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0732s] [ 85%] 2024-08-07T18:08:36.9099093Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0458s] [ 85%] 2024-08-07T18:08:36.9100396Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0455s] [ 85%] 2024-08-07T18:08:36.9101668Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0915s] [ 85%] 2024-08-07T18:08:36.9102904Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0934s] [ 85%] 2024-08-07T18:08:36.9104154Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0538s] [ 85%] 2024-08-07T18:08:36.9105392Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0534s] [ 85%] 2024-08-07T18:08:36.9106633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0657s] [ 85%] 2024-08-07T18:08:36.9107914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0686s] [ 85%] 2024-08-07T18:08:36.9109201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0460s] [ 85%] 2024-08-07T18:08:36.9110439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0460s] [ 85%] 2024-08-07T18:08:36.9111659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0882s] [ 85%] 2024-08-07T18:08:36.9112911Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.4690s] [ 85%] 2024-08-07T18:08:36.9114139Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0557s] [ 86%] 2024-08-07T18:08:36.9115383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0543s] [ 86%] 2024-08-07T18:08:36.9116648Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0513s] [ 86%] 2024-08-07T18:08:36.9117951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0529s] [ 86%] 2024-08-07T18:08:36.9119244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0316s] [ 86%] 2024-08-07T18:08:36.9120476Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0326s] [ 86%] 
2024-08-07T18:08:36.9121716Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0743s] [ 86%] 2024-08-07T18:08:36.9122949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0765s] [ 86%] 2024-08-07T18:08:36.9124195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0398s] [ 86%] 2024-08-07T18:08:36.9125422Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0397s] [ 86%] 2024-08-07T18:08:36.9126727Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0489s] [ 86%] 2024-08-07T18:08:36.9128005Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0502s] [ 86%] 2024-08-07T18:08:36.9129237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0331s] [ 86%] 2024-08-07T18:08:36.9130453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0336s] [ 86%] 2024-08-07T18:08:36.9131674Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0697s] [ 86%] 2024-08-07T18:08:36.9132926Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0710s] [ 86%] 2024-08-07T18:08:36.9134147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0416s] [ 86%] 2024-08-07T18:08:36.9135390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0398s] [ 86%] 2024-08-07T18:08:36.9136664Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0134s] [ 86%] 2024-08-07T18:08:36.9137963Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED 
[0.0130s] [ 86%] 2024-08-07T18:08:36.9139184Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0099s] [ 86%] 2024-08-07T18:08:36.9140416Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0106s] [ 86%] 2024-08-07T18:08:36.9141640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0167s] [ 86%] 2024-08-07T18:08:36.9142871Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0169s] [ 86%] 2024-08-07T18:08:36.9144108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0106s] [ 86%] 2024-08-07T18:08:36.9145371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 86%] 2024-08-07T18:08:36.9146650Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0132s] [ 86%] 2024-08-07T18:08:36.9147865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 86%] 2024-08-07T18:08:36.9149085Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0100s] [ 86%] 2024-08-07T18:08:36.9150299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 86%] 2024-08-07T18:08:36.9151539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0172s] [ 86%] 2024-08-07T18:08:36.9152756Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0166s] [ 86%] 2024-08-07T18:08:36.9153962Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0107s] [ 86%] 2024-08-07T18:08:36.9155248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 
PASSED [0.0103s] [ 86%] 2024-08-07T18:08:36.9156516Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0145s] [ 86%] 2024-08-07T18:08:36.9157763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0147s] [ 86%] 2024-08-07T18:08:36.9158976Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0107s] [ 86%] 2024-08-07T18:08:36.9160218Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 86%] 2024-08-07T18:08:36.9161441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0179s] [ 86%] 2024-08-07T18:08:36.9162690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0180s] [ 86%] 2024-08-07T18:08:36.9163953Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0108s] [ 86%] 2024-08-07T18:08:36.9165237Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0110s] [ 86%] 2024-08-07T18:08:36.9166464Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0148s] [ 86%] 2024-08-07T18:08:36.9167690Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0146s] [ 86%] 2024-08-07T18:08:36.9168922Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0106s] [ 86%] 2024-08-07T18:08:36.9170143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 86%] 2024-08-07T18:08:36.9171374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0178s] [ 86%] 2024-08-07T18:08:36.9172591Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0177s] [ 86%] 2024-08-07T18:08:36.9173864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0108s] [ 86%] 2024-08-07T18:08:36.9175140Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0108s] [ 86%] 2024-08-07T18:08:36.9176378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0183s] [ 86%] 2024-08-07T18:08:36.9177634Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0183s] [ 86%] 2024-08-07T18:08:36.9178857Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0132s] [ 86%] 2024-08-07T18:08:36.9180106Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0129s] [ 86%] 2024-08-07T18:08:36.9181320Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0218s] [ 86%] 2024-08-07T18:08:36.9182608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0218s] [ 86%] 2024-08-07T18:08:36.9183878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0139s] [ 86%] 2024-08-07T18:08:36.9185143Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0136s] [ 86%] 2024-08-07T18:08:36.9186350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0186s] [ 86%] 2024-08-07T18:08:36.9187586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0187s] [ 86%] 2024-08-07T18:08:36.9188823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0134s] [ 86%] 
2024-08-07T18:08:36.9190039Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0133s] [ 86%] 2024-08-07T18:08:36.9191268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0221s] [ 86%] 2024-08-07T18:08:36.9192545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0220s] [ 86%] 2024-08-07T18:08:36.9193827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0136s] [ 86%] 2024-08-07T18:08:36.9195327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0137s] [ 86%] 2024-08-07T18:08:36.9196655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0117s] [ 86%] 2024-08-07T18:08:36.9197897Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0117s] [ 86%] 2024-08-07T18:08:36.9199119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 86%] 2024-08-07T18:08:36.9200350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0083s] [ 86%] 2024-08-07T18:08:36.9201647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0151s] [ 86%] 2024-08-07T18:08:36.9202965Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0153s] [ 86%] 2024-08-07T18:08:36.9204186Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0092s] [ 86%] 2024-08-07T18:08:36.9205423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 86%] 2024-08-07T18:08:36.9206640Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 86%] 
2024-08-07T18:08:36.9207905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0117s] [ 86%] 2024-08-07T18:08:36.9209110Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0088s] [ 86%] 2024-08-07T18:08:36.9210326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 87%] 2024-08-07T18:08:36.9211619Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0156s] [ 87%] 2024-08-07T18:08:36.9212937Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0152s] [ 87%] 2024-08-07T18:08:36.9214168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0095s] [ 87%] 2024-08-07T18:08:36.9215383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 87%] 2024-08-07T18:08:36.9216680Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.1036s] [ 87%] 2024-08-07T18:08:36.9217951Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.4978s] [ 87%] 2024-08-07T18:08:36.9219198Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0613s] [ 87%] 2024-08-07T18:08:36.9220449Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0609s] [ 87%] 2024-08-07T18:08:36.9221729Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1459s] [ 87%] 2024-08-07T18:08:36.9223036Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1501s] [ 87%] 2024-08-07T18:08:36.9224267Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED 
[0.0743s] [ 87%] 2024-08-07T18:08:36.9225526Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0740s] [ 87%] 2024-08-07T18:08:36.9226770Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0855s] [ 87%] 2024-08-07T18:08:36.9228032Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0883s] [ 87%] 2024-08-07T18:08:36.9229250Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0611s] [ 87%] 2024-08-07T18:08:36.9230534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0639s] [ 87%] 2024-08-07T18:08:36.9231803Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1241s] [ 87%] 2024-08-07T18:08:36.9233027Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1253s] [ 87%] 2024-08-07T18:08:36.9234269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.4372s] [ 87%] 2024-08-07T18:08:36.9235496Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0797s] [ 87%] 2024-08-07T18:08:36.9236744Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.1133s] [ 87%] 2024-08-07T18:08:36.9237990Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.1198s] [ 87%] 2024-08-07T18:08:36.9239274Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0700s] [ 87%] 2024-08-07T18:08:36.9240555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0691s] [ 87%] 2024-08-07T18:08:36.9241812Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1578s] [ 87%] 2024-08-07T18:08:36.9243040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1615s] [ 87%] 2024-08-07T18:08:36.9244271Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0852s] [ 87%] 2024-08-07T18:08:36.9245530Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0861s] [ 87%] 2024-08-07T18:08:36.9246759Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0975s] [ 87%] 2024-08-07T18:08:36.9248004Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.4654s] [ 87%] 2024-08-07T18:08:36.9249270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0705s] [ 87%] 2024-08-07T18:08:36.9250562Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0723s] [ 87%] 2024-08-07T18:08:36.9251779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1309s] [ 87%] 2024-08-07T18:08:36.9253019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1322s] [ 87%] 2024-08-07T18:08:36.9254244Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0861s] [ 87%] 2024-08-07T18:08:36.9255480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0859s] [ 87%] 2024-08-07T18:08:36.9256724Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.1335s] [ 87%] 2024-08-07T18:08:36.9258016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.1387s] [ 87%] 
2024-08-07T18:08:36.9259327Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0838s] [ 87%] 2024-08-07T18:08:36.9260557Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0830s] [ 87%] 2024-08-07T18:08:36.9261796Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.5391s] [ 87%] 2024-08-07T18:08:36.9263033Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1783s] [ 87%] 2024-08-07T18:08:36.9264287Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0994s] [ 87%] 2024-08-07T18:08:36.9265517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0991s] [ 87%] 2024-08-07T18:08:36.9266733Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.1153s] [ 87%] 2024-08-07T18:08:36.9268038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.1172s] [ 87%] 2024-08-07T18:08:36.9269306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0822s] [ 87%] 2024-08-07T18:08:36.9270565Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0825s] [ 87%] 2024-08-07T18:08:36.9271789Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1470s] [ 87%] 2024-08-07T18:08:36.9273037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1479s] [ 87%] 2024-08-07T18:08:36.9274265Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.4599s] [ 87%] 2024-08-07T18:08:36.9275509Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0998s] [ 87%] 2024-08-07T18:08:36.9276799Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.1001s] [ 87%] 2024-08-07T18:08:36.9278115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.1036s] [ 87%] 2024-08-07T18:08:36.9279334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0585s] [ 87%] 2024-08-07T18:08:36.9280560Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0582s] [ 87%] 2024-08-07T18:08:36.9281810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1467s] [ 87%] 2024-08-07T18:08:36.9283051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1499s] [ 87%] 2024-08-07T18:08:36.9284295Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0749s] [ 87%] 2024-08-07T18:08:36.9285539Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0743s] [ 87%] 2024-08-07T18:08:36.9286816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0856s] [ 87%] 2024-08-07T18:08:36.9288104Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0895s] [ 87%] 2024-08-07T18:08:36.9289337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0605s] [ 87%] 2024-08-07T18:08:36.9290555Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.4296s] [ 87%] 2024-08-07T18:08:36.9291838Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.1193s] [ 87%] 
2024-08-07T18:08:36.9293094Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.1204s] [ 87%] 2024-08-07T18:08:36.9294312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0758s] [ 87%] 2024-08-07T18:08:36.9296037Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0755s] [ 87%] 2024-08-07T18:08:36.9297361Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0177s] [ 87%] 2024-08-07T18:08:36.9298627Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0181s] [ 87%] 2024-08-07T18:08:36.9299842Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0127s] [ 87%] 2024-08-07T18:08:36.9301119Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0129s] [ 87%] 2024-08-07T18:08:36.9302352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0235s] [ 87%] 2024-08-07T18:08:36.9303643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0237s] [ 87%] 2024-08-07T18:08:36.9304878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0149s] [ 87%] 2024-08-07T18:08:36.9306168Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0146s] [ 87%] 2024-08-07T18:08:36.9307470Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0180s] [ 87%] 2024-08-07T18:08:36.9308701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0182s] [ 88%] 2024-08-07T18:08:36.9309930Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED 
[0.0134s] [ 88%] 2024-08-07T18:08:36.9311149Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0133s] [ 88%] 2024-08-07T18:08:36.9312390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0240s] [ 88%] 2024-08-07T18:08:36.9313608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0239s] [ 88%] 2024-08-07T18:08:36.9314865Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0152s] [ 88%] 2024-08-07T18:08:36.9316204Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0150s] [ 88%] 2024-08-07T18:08:36.9317425Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0194s] [ 88%] 2024-08-07T18:08:36.9318685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0199s] [ 88%] 2024-08-07T18:08:36.9319903Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0146s] [ 88%] 2024-08-07T18:08:36.9321150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0148s] [ 88%] 2024-08-07T18:08:36.9322369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0256s] [ 88%] 2024-08-07T18:08:36.9323611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0256s] [ 88%] 2024-08-07T18:08:36.9324900Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0164s] [ 88%] 2024-08-07T18:08:36.9326190Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0163s] [ 88%] 2024-08-07T18:08:36.9327415Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0199s] [ 88%] 2024-08-07T18:08:36.9328652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0204s] [ 88%] 2024-08-07T18:08:36.9329887Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0149s] [ 88%] 2024-08-07T18:08:36.9331109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0146s] [ 88%] 2024-08-07T18:08:36.9332342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0254s] [ 88%] 2024-08-07T18:08:36.9333620Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0253s] [ 88%] 2024-08-07T18:08:36.9334908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0162s] [ 88%] 2024-08-07T18:08:36.9336125Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0162s] [ 88%] 2024-08-07T18:08:36.9337337Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0227s] [ 88%] 2024-08-07T18:08:36.9338600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0240s] [ 88%] 2024-08-07T18:08:36.9339822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0173s] [ 88%] 2024-08-07T18:08:36.9341067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0176s] [ 88%] 2024-08-07T18:08:36.9342286Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0282s] [ 88%] 2024-08-07T18:08:36.9343571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0284s] [ 88%] 
2024-08-07T18:08:36.9344852Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0195s] [ 88%] 2024-08-07T18:08:36.9346098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0189s] [ 88%] 2024-08-07T18:08:36.9347308Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0235s] [ 88%] 2024-08-07T18:08:36.9348545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0240s] [ 88%] 2024-08-07T18:08:36.9349783Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0178s] [ 88%] 2024-08-07T18:08:36.9351002Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0174s] [ 88%] 2024-08-07T18:08:36.9352276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0295s] [ 88%] 2024-08-07T18:08:36.9353551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0296s] [ 88%] 2024-08-07T18:08:36.9354782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0193s] [ 88%] 2024-08-07T18:08:36.9355998Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0192s] [ 88%] 2024-08-07T18:08:36.9357235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0164s] [ 88%] 2024-08-07T18:08:36.9358488Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0167s] [ 88%] 2024-08-07T18:08:36.9359700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0116s] [ 88%] 2024-08-07T18:08:36.9360935Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED 
[0.0117s] [ 88%] 2024-08-07T18:08:36.9362201Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0220s] [ 88%] 2024-08-07T18:08:36.9363498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0224s] [ 88%] 2024-08-07T18:08:36.9364715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0136s] [ 88%] 2024-08-07T18:08:36.9365960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0133s] [ 88%] 2024-08-07T18:08:36.9367171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0167s] [ 88%] 2024-08-07T18:08:36.9368439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0171s] [ 88%] 2024-08-07T18:08:36.9369671Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0121s] [ 88%] 2024-08-07T18:08:36.9370931Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0124s] [ 88%] 2024-08-07T18:08:36.9372214Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0225s] [ 88%] 2024-08-07T18:08:36.9373431Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0229s] [ 88%] 2024-08-07T18:08:36.9374658Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0135s] [ 88%] 2024-08-07T18:08:36.9375879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0137s] [ 88%] 2024-08-07T18:08:36.9377117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0126s] [ 88%] 2024-08-07T18:08:36.9378358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 
PASSED [0.0128s] [ 88%] 2024-08-07T18:08:36.9379588Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0090s] [ 88%] 2024-08-07T18:08:36.9380854Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 88%] 2024-08-07T18:08:36.9382135Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0163s] [ 88%] 2024-08-07T18:08:36.9383378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0166s] [ 88%] 2024-08-07T18:08:36.9384611Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0097s] [ 88%] 2024-08-07T18:08:36.9385864Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0096s] [ 88%] 2024-08-07T18:08:36.9387088Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0129s] [ 88%] 2024-08-07T18:08:36.9388342Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0127s] [ 88%] 2024-08-07T18:08:36.9389605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0092s] [ 88%] 2024-08-07T18:08:36.9390893Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 88%] 2024-08-07T18:08:36.9392109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0167s] [ 88%] 2024-08-07T18:08:36.9393325Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0165s] [ 88%] 2024-08-07T18:08:36.9394561Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0097s] [ 88%] 2024-08-07T18:08:36.9396233Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0098s] [ 88%] 2024-08-07T18:08:36.9397495Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0149s] [ 88%] 2024-08-07T18:08:36.9398736Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0149s] [ 88%] 2024-08-07T18:08:36.9400059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0104s] [ 88%] 2024-08-07T18:08:36.9401374Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 88%] 2024-08-07T18:08:36.9402614Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0184s] [ 88%] 2024-08-07T18:08:36.9403834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0186s] [ 88%] 2024-08-07T18:08:36.9405059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0109s] [ 89%] 2024-08-07T18:08:36.9406311Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0109s] [ 89%] 2024-08-07T18:08:36.9407523Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0147s] [ 89%] 2024-08-07T18:08:36.9408779Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0147s] [ 89%] 2024-08-07T18:08:36.9410052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0106s] [ 89%] 2024-08-07T18:08:36.9411357Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 89%] 2024-08-07T18:08:36.9412571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0188s] [ 89%] 
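Every PASSED entry in this run is one instance of the same parameterized check: run scaled_dot_product_attention on CUDA with the memory-efficient backend, repeat the identical computation with the math reference backend, and compare both the forward output and the q/k/v gradients. The sketch below is a minimal reconstruction of that comparison, not the actual test_transformers.py harness; it fixes dropout_p to 0.0 (the real suite also covers 0.22, which requires reconstructing the dropout mask and is omitted here), n_heads is an arbitrary choice since it is not encoded in the test IDs, and the tolerances are illustrative.

```python
# Minimal sketch (assumes a CUDA device and PyTorch >= 2.3 for
# torch.nn.attention.sdpa_kernel); NOT the actual test harness.
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

def check_mem_efficient_vs_math(batch_size=8, n_heads=4, seq_len_q=512,
                                seq_len_k=64, head_dim=8, is_causal=False,
                                dtype=torch.float16, scale=None):
    # n_heads=4 is a placeholder; the test IDs do not encode it.
    def make(seq_len):
        return torch.randn(batch_size, n_heads, seq_len, head_dim,
                           device="cuda", dtype=dtype, requires_grad=True)

    q, k, v = make(seq_len_q), make(seq_len_k), make(seq_len_k)

    # Pin SDPA to the memory-efficient CUDA kernel for the tested path.
    with sdpa_kernel(SDPBackend.EFFICIENT_ATTENTION):
        out = F.scaled_dot_product_attention(q, k, v, is_causal=is_causal,
                                             scale=scale)
    grads = torch.autograd.grad(out.sum(), (q, k, v))

    # Recompute the same thing with the composable math reference backend.
    q_r, k_r, v_r = (t.detach().clone().requires_grad_() for t in (q, k, v))
    with sdpa_kernel(SDPBackend.MATH):
        ref = F.scaled_dot_product_attention(q_r, k_r, v_r,
                                             is_causal=is_causal, scale=scale)
    ref_grads = torch.autograd.grad(ref.sum(), (q_r, k_r, v_r))

    # Illustrative tolerances; the suite derives its own per-dtype bounds.
    torch.testing.assert_close(out, ref, atol=2e-3, rtol=2e-3)
    for g, g_ref in zip(grads, ref_grads):
        torch.testing.assert_close(g, g_ref, atol=2e-3, rtol=2e-3)

check_mem_efficient_vs_math()  # one (batch 8, q 512, k 64, d 8) fp16 case
```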
2024-08-07T18:08:36.9413819Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0188s] [ 89%] 2024-08-07T18:08:36.9415040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0110s] [ 89%] 2024-08-07T18:08:36.9416299Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0111s] [ 89%] 2024-08-07T18:08:36.9417538Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0187s] [ 89%] 2024-08-07T18:08:36.9418834Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0188s] [ 89%] 2024-08-07T18:08:36.9420129Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0132s] [ 89%] 2024-08-07T18:08:36.9421365Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0135s] [ 89%] 2024-08-07T18:08:36.9422601Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0226s] [ 89%] 2024-08-07T18:08:36.9423830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0227s] [ 89%] 2024-08-07T18:08:36.9425073Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0140s] [ 89%] 2024-08-07T18:08:36.9426297Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0146s] [ 89%] 2024-08-07T18:08:36.9427501Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0187s] [ 89%] 2024-08-07T18:08:36.9428794Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0188s] [ 89%] 2024-08-07T18:08:36.9430049Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0134s] [ 
89%] 2024-08-07T18:08:36.9431280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0134s] [ 89%] 2024-08-07T18:08:36.9432491Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0225s] [ 89%] 2024-08-07T18:08:36.9433734Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0223s] [ 89%] 2024-08-07T18:08:36.9434941Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0140s] [ 89%] 2024-08-07T18:08:36.9436170Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0140s] [ 89%] 2024-08-07T18:08:36.9437421Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0115s] [ 89%] 2024-08-07T18:08:36.9438700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0117s] [ 89%] 2024-08-07T18:08:36.9439929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0087s] [ 89%] 2024-08-07T18:08:36.9441148Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 89%] 2024-08-07T18:08:36.9442384Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0157s] [ 89%] 2024-08-07T18:08:36.9443615Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0156s] [ 89%] 2024-08-07T18:08:36.9444849Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 89%] 2024-08-07T18:08:36.9446067Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 89%] 2024-08-07T18:08:36.9447333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0118s] [ 89%] 
2024-08-07T18:08:36.9448598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0119s] [ 89%] 2024-08-07T18:08:36.9449806Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0090s] [ 89%] 2024-08-07T18:08:36.9451035Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 89%] 2024-08-07T18:08:36.9452248Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0159s] [ 89%] 2024-08-07T18:08:36.9453489Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0160s] [ 89%] 2024-08-07T18:08:36.9454717Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 89%] 2024-08-07T18:08:36.9455997Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_512_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 89%] 2024-08-07T18:08:36.9457261Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 89%] 2024-08-07T18:08:36.9458525Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 89%] 2024-08-07T18:08:36.9459741Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 89%] 2024-08-07T18:08:36.9460964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 89%] 2024-08-07T18:08:36.9462210Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0089s] [ 89%] 2024-08-07T18:08:36.9463439Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 89%] 2024-08-07T18:08:36.9464676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 
89%] 2024-08-07T18:08:36.9465947Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 89%] 2024-08-07T18:08:36.9467235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 89%] 2024-08-07T18:08:36.9468466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 89%] 2024-08-07T18:08:36.9469698Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0065s] [ 89%] 2024-08-07T18:08:36.9470917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 89%] 2024-08-07T18:08:36.9472137Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0083s] [ 89%] 2024-08-07T18:08:36.9473376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 89%] 2024-08-07T18:08:36.9474643Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0077s] [ 89%] 2024-08-07T18:08:36.9475932Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 89%] 2024-08-07T18:08:36.9477145Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0077s] [ 89%] 2024-08-07T18:08:36.9478390Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0084s] [ 89%] 2024-08-07T18:08:36.9479613Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 89%] 2024-08-07T18:08:36.9480869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 89%] 2024-08-07T18:08:36.9482086Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 
PASSED [0.0094s] [ 89%] 2024-08-07T18:08:36.9483307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 89%] 2024-08-07T18:08:36.9484597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0079s] [ 89%] 2024-08-07T18:08:36.9485878Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0077s] [ 89%] 2024-08-07T18:08:36.9487109Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 89%] 2024-08-07T18:08:36.9488329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 89%] 2024-08-07T18:08:36.9489559Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 89%] 2024-08-07T18:08:36.9490782Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 89%] 2024-08-07T18:08:36.9492012Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 89%] 2024-08-07T18:08:36.9493276Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 89%] 2024-08-07T18:08:36.9494548Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 89%] 2024-08-07T18:08:36.9496141Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 89%] 2024-08-07T18:08:36.9497386Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0095s] [ 89%] 2024-08-07T18:08:36.9498637Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 89%] 2024-08-07T18:08:36.9499868Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0078s] [ 89%] 2024-08-07T18:08:36.9501112Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 90%] 2024-08-07T18:08:36.9502333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0113s] [ 90%] 2024-08-07T18:08:36.9503661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0111s] [ 90%] 2024-08-07T18:08:36.9504975Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0085s] [ 90%] 2024-08-07T18:08:36.9506203Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0086s] [ 90%] 2024-08-07T18:08:36.9507433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0086s] [ 90%] 2024-08-07T18:08:36.9508659Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0086s] [ 90%] 2024-08-07T18:08:36.9509891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0081s] [ 90%] 2024-08-07T18:08:36.9511107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 90%] 2024-08-07T18:08:36.9512440Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0099s] [ 90%] 2024-08-07T18:08:36.9513747Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0099s] [ 90%] 2024-08-07T18:08:36.9514983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 90%] 2024-08-07T18:08:36.9516243Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 90%] 
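The test IDs encode the sweep dimensions directly in the name (batch_size, seq_len_q, seq_len_k, head_dim, is_causal, dropout_p, dtype, scale). The enumeration below is read off the IDs visible in this chunk of the log, not taken from the test source, so the full suite may sweep more values (only batch_size 8 appears here, for example); it just shows why a single parameterized test produces hundreds of PASSED lines.

```python
# Parameter sweep inferred from the test IDs visible in this log chunk;
# value sets are assumptions reconstructed from the names, not the source.
from itertools import product

seq_lens_q = (64, 512)
seq_lens_k = (4, 8, 64, 128, 256, 512)
head_dims  = (8, 16, 32, 64)
is_causal  = (False, True)
dropout_ps = (0.0, 0.22)
dtypes     = ("float16", "float32")
scales     = ("scale0", "scale_l1")  # two scale settings per the IDs

grid = list(product(seq_lens_q, seq_lens_k, head_dims, is_causal,
                    dropout_ps, dtypes, scales))
print(len(grid))  # 768 combinations for batch_size=8 in this chunk alone
```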
2024-08-07T18:08:36.9517471Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 90%] 2024-08-07T18:08:36.9518725Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 90%] 2024-08-07T18:08:36.9519952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 90%] 2024-08-07T18:08:36.9521199Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 90%] 2024-08-07T18:08:36.9523130Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0089s] [ 90%] 2024-08-07T18:08:36.9524474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 90%] 2024-08-07T18:08:36.9525691Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0071s] [ 90%] 2024-08-07T18:08:36.9526933Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 90%] 2024-08-07T18:08:36.9528147Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0069s] [ 90%] 2024-08-07T18:08:36.9529367Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 90%] 2024-08-07T18:08:36.9530604Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 90%] 2024-08-07T18:08:36.9531869Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 90%] 2024-08-07T18:08:36.9533155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 90%] 2024-08-07T18:08:36.9534370Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0080s] [ 
90%] 2024-08-07T18:08:36.9535599Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0072s] [ 90%] 2024-08-07T18:08:36.9536822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_128_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 90%] 2024-08-07T18:08:36.9538069Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0111s] [ 90%] 2024-08-07T18:08:36.9539288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 90%] 2024-08-07T18:08:36.9540514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 90%] 2024-08-07T18:08:36.9541802Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 90%] 2024-08-07T18:08:36.9543097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0138s] [ 90%] 2024-08-07T18:08:36.9544344Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0139s] [ 90%] 2024-08-07T18:08:36.9545567Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 90%] 2024-08-07T18:08:36.9553371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0088s] [ 90%] 2024-08-07T18:08:36.9554753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0090s] [ 90%] 2024-08-07T18:08:36.9556009Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0094s] [ 90%] 2024-08-07T18:08:36.9557350Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0083s] [ 90%] 2024-08-07T18:08:36.9558661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 
PASSED [0.0084s] [ 90%] 2024-08-07T18:08:36.9559879Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0105s] [ 90%] 2024-08-07T18:08:36.9561120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0105s] [ 90%] 2024-08-07T18:08:36.9562345Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0094s] [ 90%] 2024-08-07T18:08:36.9563584Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 90%] 2024-08-07T18:08:36.9564822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0119s] [ 90%] 2024-08-07T18:08:36.9566052Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0124s] [ 90%] 2024-08-07T18:08:36.9567359Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0090s] [ 90%] 2024-08-07T18:08:36.9568638Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 90%] 2024-08-07T18:08:36.9569881Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0147s] [ 90%] 2024-08-07T18:08:36.9571107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0148s] [ 90%] 2024-08-07T18:08:36.9572358Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0100s] [ 90%] 2024-08-07T18:08:36.9573600Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0099s] [ 90%] 2024-08-07T18:08:36.9574817Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0100s] [ 90%] 2024-08-07T18:08:36.9576099Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 90%] 2024-08-07T18:08:36.9577383Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0092s] [ 90%] 2024-08-07T18:08:36.9578623Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 90%] 2024-08-07T18:08:36.9579840Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0112s] [ 90%] 2024-08-07T18:08:36.9581087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 90%] 2024-08-07T18:08:36.9582312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0102s] [ 90%] 2024-08-07T18:08:36.9583553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 90%] 2024-08-07T18:08:36.9584795Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0138s] [ 90%] 2024-08-07T18:08:36.9586071Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0143s] [ 90%] 2024-08-07T18:08:36.9587376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0103s] [ 90%] 2024-08-07T18:08:36.9588597Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0105s] [ 90%] 2024-08-07T18:08:36.9589841Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0169s] [ 90%] 2024-08-07T18:08:36.9591077Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0169s] [ 90%] 2024-08-07T18:08:36.9592326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0117s] [ 90%] 
2024-08-07T18:08:36.9593556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0121s] [ 90%] 2024-08-07T18:08:36.9594828Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0116s] [ 90%] 2024-08-07T18:08:36.9596827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0114s] [ 90%] 2024-08-07T18:08:36.9598070Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0104s] [ 90%] 2024-08-07T18:08:36.9599315Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0103s] [ 90%] 2024-08-07T18:08:36.9600533Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0131s] [ 90%] 2024-08-07T18:08:36.9601781Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0135s] [ 90%] 2024-08-07T18:08:36.9602996Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0117s] [ 90%] 2024-08-07T18:08:36.9604234Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0116s] [ 90%] 2024-08-07T18:08:36.9605624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0106s] [ 90%] 2024-08-07T18:08:36.9606992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0107s] [ 91%] 2024-08-07T18:08:36.9608209Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0079s] [ 91%] 2024-08-07T18:08:36.9609429Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 91%] 2024-08-07T18:08:36.9610677Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED 
[0.0133s] [ 91%] 2024-08-07T18:08:36.9611917Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0135s] [ 91%] 2024-08-07T18:08:36.9613144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0088s] [ 91%] 2024-08-07T18:08:36.9614443Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 91%] 2024-08-07T18:08:36.9615791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0088s] [ 91%] 2024-08-07T18:08:36.9617016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0087s] [ 91%] 2024-08-07T18:08:36.9618260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0080s] [ 91%] 2024-08-07T18:08:36.9619480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0079s] [ 91%] 2024-08-07T18:08:36.9620700Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0102s] [ 91%] 2024-08-07T18:08:36.9621938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0102s] [ 91%] 2024-08-07T18:08:36.9623155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0091s] [ 91%] 2024-08-07T18:08:36.9624448Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0090s] [ 91%] 2024-08-07T18:08:36.9625715Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 91%] 2024-08-07T18:08:36.9626948Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 91%] 2024-08-07T18:08:36.9628236Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED 
[0.0056s] [ 91%] 2024-08-07T18:08:36.9629474Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 91%] 2024-08-07T18:08:36.9630719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 91%] 2024-08-07T18:08:36.9631920Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 91%] 2024-08-07T18:08:36.9633202Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 91%] 2024-08-07T18:08:36.9634479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 91%] 2024-08-07T18:08:36.9635703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 91%] 2024-08-07T18:08:36.9636909Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 91%] 2024-08-07T18:08:36.9638158Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 91%] 2024-08-07T18:08:36.9639377Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 91%] 2024-08-07T18:08:36.9640598Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 91%] 2024-08-07T18:08:36.9641811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 91%] 2024-08-07T18:08:36.9643066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 91%] 2024-08-07T18:08:36.9644348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 91%] 2024-08-07T18:08:36.9645558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 
91%] 2024-08-07T18:08:36.9646791Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 91%] 2024-08-07T18:08:36.9648023Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 91%] 2024-08-07T18:08:36.9649269Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 91%] 2024-08-07T18:08:36.9650480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 91%] 2024-08-07T18:08:36.9651720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 91%] 2024-08-07T18:08:36.9652980Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0062s] [ 91%] 2024-08-07T18:08:36.9654259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 91%] 2024-08-07T18:08:36.9655478Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 91%] 2024-08-07T18:08:36.9656695Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 91%] 2024-08-07T18:08:36.9657949Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 91%] 2024-08-07T18:08:36.9659164Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 91%] 2024-08-07T18:08:36.9660385Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 91%] 2024-08-07T18:08:36.9661646Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 91%] 2024-08-07T18:08:36.9662923Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 91%] 
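When reproducing one of these cases interactively it can help to confirm which CUDA SDPA backends are currently enabled; the queries below are standard torch.backends.cuda flags (the suite itself pins the backend per call via an sdpa_kernel context as sketched above, rather than flipping these globally).

```python
import torch

# Global CUDA SDPA backend toggles; shown only for interactive debugging.
print(torch.backends.cuda.flash_sdp_enabled())
print(torch.backends.cuda.mem_efficient_sdp_enabled())
print(torch.backends.cuda.math_sdp_enabled())
```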
2024-08-07T18:08:36.9664161Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_64_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 91%]
[... remainder of the test_mem_efficient_attention_vs_math_ref_grads parameter sweep elided: batch_size 8; seq_len_q 8 and 64; seq_len_k 4, 8, 64, 128, 256, 512; head_dim 8, 16, 32, 64; is_causal False/True; dropout_p 0.0/0.22; float16/float32; scale0/scale_l1; every case PASSED in roughly 0.005-0.03s each, progress advancing from [ 91%] to [ 95%] ...]
2024-08-07T18:08:37.0028485Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s]
[ 95%] 2024-08-07T18:08:37.0029785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0087s] [ 95%] 2024-08-07T18:08:37.0031038Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 95%] 2024-08-07T18:08:37.0032254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 95%] 2024-08-07T18:08:37.0033502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 95%] 2024-08-07T18:08:37.0034721Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0067s] [ 95%] 2024-08-07T18:08:37.0035938Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 95%] 2024-08-07T18:08:37.0037159Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 95%] 2024-08-07T18:08:37.0038426Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 95%] 2024-08-07T18:08:37.0039720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 95%] 2024-08-07T18:08:37.0040942Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0075s] [ 95%] 2024-08-07T18:08:37.0042176Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 95%] 2024-08-07T18:08:37.0043403Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 95%] 2024-08-07T18:08:37.0044642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0087s] [ 95%] 2024-08-07T18:08:37.0045862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED 
[0.0089s] [ 95%] 2024-08-07T18:08:37.0047121Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 95%] 2024-08-07T18:08:37.0048418Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 95%] 2024-08-07T18:08:37.0049652Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0096s] [ 95%] 2024-08-07T18:08:37.0050896Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0096s] [ 95%] 2024-08-07T18:08:37.0052117Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0073s] [ 95%] 2024-08-07T18:08:37.0053369Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 95%] 2024-08-07T18:08:37.0054579Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0072s] [ 95%] 2024-08-07T18:08:37.0055805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 95%] 2024-08-07T18:08:37.0057059Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0066s] [ 95%] 2024-08-07T18:08:37.0058331Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0064s] [ 95%] 2024-08-07T18:08:37.0059593Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0077s] [ 95%] 2024-08-07T18:08:37.0060812Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0076s] [ 95%] 2024-08-07T18:08:37.0062047Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 95%] 2024-08-07T18:08:37.0063275Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 
PASSED [0.0077s] [ 95%] 2024-08-07T18:08:37.0064507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0109s] [ 95%] 2024-08-07T18:08:37.0065785Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0106s] [ 95%] 2024-08-07T18:08:37.0067065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 95%] 2024-08-07T18:08:37.0068288Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0074s] [ 95%] 2024-08-07T18:08:37.0069507Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0113s] [ 95%] 2024-08-07T18:08:37.0070757Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0115s] [ 95%] 2024-08-07T18:08:37.0071992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0085s] [ 95%] 2024-08-07T18:08:37.0073235Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 95%] 2024-08-07T18:08:37.0074442Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0084s] [ 95%] 2024-08-07T18:08:37.0075720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0085s] [ 95%] 2024-08-07T18:08:37.0076987Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 95%] 2024-08-07T18:08:37.0078223Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0072s] [ 95%] 2024-08-07T18:08:37.0079433Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0089s] [ 95%] 2024-08-07T18:08:37.0080659Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0089s] [ 95%] 2024-08-07T18:08:37.0081902Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0084s] [ 95%] 2024-08-07T18:08:37.0083120Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 95%] 2024-08-07T18:08:37.0084395Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 95%] 2024-08-07T18:08:37.0085665Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 95%] 2024-08-07T18:08:37.0086895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 95%] 2024-08-07T18:08:37.0088108Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 95%] 2024-08-07T18:08:37.0089349Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0080s] [ 96%] 2024-08-07T18:08:37.0090590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 96%] 2024-08-07T18:08:37.0091805Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 96%] 2024-08-07T18:08:37.0093044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 96%] 2024-08-07T18:08:37.0094298Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 96%] 2024-08-07T18:08:37.0095960Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 96%] 2024-08-07T18:08:37.0097191Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0064s] [ 96%] 2024-08-07T18:08:37.0098417Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 96%] 2024-08-07T18:08:37.0099633Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0073s] [ 96%] 2024-08-07T18:08:37.0100891Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 96%] 2024-08-07T18:08:37.0102097Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 96%] 2024-08-07T18:08:37.0103312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_256_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 96%] 2024-08-07T18:08:37.0104642Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 96%] 2024-08-07T18:08:37.0105927Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 96%] 2024-08-07T18:08:37.0107152Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0056s] [ 96%] 2024-08-07T18:08:37.0108378Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 96%] 2024-08-07T18:08:37.0109617Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 96%] 2024-08-07T18:08:37.0110855Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 96%] 2024-08-07T18:08:37.0112083Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 96%] 2024-08-07T18:08:37.0113360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 96%] 2024-08-07T18:08:37.0114630Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 96%] 2024-08-07T18:08:37.0115891Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 96%] 2024-08-07T18:08:37.0117098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 96%] 2024-08-07T18:08:37.0118326Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 96%] 2024-08-07T18:08:37.0119608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 96%] 2024-08-07T18:08:37.0120853Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 96%] 2024-08-07T18:08:37.0122057Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 96%] 2024-08-07T18:08:37.0123338Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 96%] 2024-08-07T18:08:37.0124595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 96%] 2024-08-07T18:08:37.0125810Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 96%] 2024-08-07T18:08:37.0127040Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 96%] 2024-08-07T18:08:37.0128257Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 96%] 2024-08-07T18:08:37.0129500Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 96%] 2024-08-07T18:08:37.0130742Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 96%] 2024-08-07T18:08:37.0132019Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 96%] 2024-08-07T18:08:37.0133282Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 96%] 2024-08-07T18:08:37.0134505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 96%] 2024-08-07T18:08:37.0135710Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 96%] 2024-08-07T18:08:37.0136915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 96%] 2024-08-07T18:08:37.0138150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 96%] 2024-08-07T18:08:37.0139355Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 96%] 2024-08-07T18:08:37.0140603Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 96%] 2024-08-07T18:08:37.0141862Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 96%] 2024-08-07T18:08:37.0143150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 96%] 2024-08-07T18:08:37.0144360Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 96%] 2024-08-07T18:08:37.0145590Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 96%] 2024-08-07T18:08:37.0146809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 96%] 2024-08-07T18:08:37.0148020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0056s] [ 96%] 2024-08-07T18:08:37.0149258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 96%] 2024-08-07T18:08:37.0150540Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0064s] [ 96%] 2024-08-07T18:08:37.0151823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 96%] 2024-08-07T18:08:37.0153060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 96%] 2024-08-07T18:08:37.0154258Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 96%] 2024-08-07T18:08:37.0155466Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 96%] 2024-08-07T18:08:37.0156692Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 96%] 2024-08-07T18:08:37.0157895Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 96%] 2024-08-07T18:08:37.0159101Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0071s] [ 96%] 2024-08-07T18:08:37.0160413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 96%] 2024-08-07T18:08:37.0161676Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 96%] 2024-08-07T18:08:37.0162907Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 96%] 2024-08-07T18:08:37.0164107Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0057s] [ 96%] 2024-08-07T18:08:37.0165340Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 96%] 2024-08-07T18:08:37.0166545Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 96%] 2024-08-07T18:08:37.0167776Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 96%] 2024-08-07T18:08:37.0168983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 96%] 2024-08-07T18:08:37.0170260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 96%] 2024-08-07T18:08:37.0171553Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 96%] 2024-08-07T18:08:37.0172762Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 96%] 2024-08-07T18:08:37.0173989Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 96%] 2024-08-07T18:08:37.0175200Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 96%] 2024-08-07T18:08:37.0176413Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0057s] [ 96%] 2024-08-07T18:08:37.0177622Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0060s] [ 96%] 2024-08-07T18:08:37.0178876Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0070s] [ 96%] 2024-08-07T18:08:37.0180133Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 96%] 2024-08-07T18:08:37.0181348Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0065s] [ 96%] 2024-08-07T18:08:37.0182566Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_4_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 96%] 2024-08-07T18:08:37.0183790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0122s] [ 96%] 2024-08-07T18:08:37.0185042Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0123s] [ 97%] 2024-08-07T18:08:37.0186254Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0077s] [ 97%] 2024-08-07T18:08:37.0187494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0076s] [ 97%] 2024-08-07T18:08:37.0188765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0132s] [ 97%] 2024-08-07T18:08:37.0190065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0133s] [ 97%] 2024-08-07T18:08:37.0191303Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 97%] 2024-08-07T18:08:37.0192528Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0085s] [ 97%] 2024-08-07T18:08:37.0193761Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0084s] [ 97%] 2024-08-07T18:08:37.0195307Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0082s] [ 97%] 2024-08-07T18:08:37.0196661Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0072s] [ 97%] 2024-08-07T18:08:37.0197979Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 97%] 2024-08-07T18:08:37.0199279Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0089s] [ 97%] 2024-08-07T18:08:37.0200498Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 97%] 2024-08-07T18:08:37.0201754Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0082s] [ 97%] 
2024-08-07T18:08:37.0202983Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0084s] [ 97%] 2024-08-07T18:08:37.0204207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0136s] [ 97%] 2024-08-07T18:08:37.0205455Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0140s] [ 97%] 2024-08-07T18:08:37.0206728Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0086s] [ 97%] 2024-08-07T18:08:37.0208041Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0087s] [ 97%] 2024-08-07T18:08:37.0209334Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0146s] [ 97%] 2024-08-07T18:08:37.0210575Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0152s] [ 97%] 2024-08-07T18:08:37.0211811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 97%] 2024-08-07T18:08:37.0213065Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0095s] [ 97%] 2024-08-07T18:08:37.0214285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0093s] [ 97%] 2024-08-07T18:08:37.0215502Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0093s] [ 97%] 2024-08-07T18:08:37.0216823Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0085s] [ 97%] 2024-08-07T18:08:37.0218098Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0082s] [ 97%] 2024-08-07T18:08:37.0219329Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0100s] [ 
97%] 2024-08-07T18:08:37.0220551Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0103s] [ 97%] 2024-08-07T18:08:37.0221814Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0090s] [ 97%] 2024-08-07T18:08:37.0223044Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0094s] [ 97%] 2024-08-07T18:08:37.0224280Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0169s] [ 97%] 2024-08-07T18:08:37.0225505Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0170s] [ 97%] 2024-08-07T18:08:37.0226775Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0108s] [ 97%] 2024-08-07T18:08:37.0228081Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0106s] [ 97%] 2024-08-07T18:08:37.0229306Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0183s] [ 97%] 2024-08-07T18:08:37.0230552Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0185s] [ 97%] 2024-08-07T18:08:37.0231800Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0119s] [ 97%] 2024-08-07T18:08:37.0233062Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0115s] [ 97%] 2024-08-07T18:08:37.0234264Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0109s] [ 97%] 2024-08-07T18:08:37.0235546Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 97%] 2024-08-07T18:08:37.0236811Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0102s] 
[ 97%] 2024-08-07T18:08:37.0238028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0102s] [ 97%] 2024-08-07T18:08:37.0239260Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0120s] [ 97%] 2024-08-07T18:08:37.0240494Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0120s] [ 97%] 2024-08-07T18:08:37.0241753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0115s] [ 97%] 2024-08-07T18:08:37.0242973Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0112s] [ 97%] 2024-08-07T18:08:37.0244205Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0110s] [ 97%] 2024-08-07T18:08:37.0245479Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0112s] [ 97%] 2024-08-07T18:08:37.0246767Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0073s] [ 97%] 2024-08-07T18:08:37.0247981Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0073s] [ 97%] 2024-08-07T18:08:37.0249193Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0125s] [ 97%] 2024-08-07T18:08:37.0250438Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0125s] [ 97%] 2024-08-07T18:08:37.0251687Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0080s] [ 97%] 2024-08-07T18:08:37.0252929Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0081s] [ 97%] 2024-08-07T18:08:37.0254155Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 
97%] 2024-08-07T18:08:37.0255420Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0078s] [ 97%] 2024-08-07T18:08:37.0256678Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0070s] [ 97%] 2024-08-07T18:08:37.0257908Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 97%] 2024-08-07T18:08:37.0259123Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 97%] 2024-08-07T18:08:37.0260352Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0088s] [ 97%] 2024-08-07T18:08:37.0261602Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0089s] [ 97%] 2024-08-07T18:08:37.0262813Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_512_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0093s] [ 97%] 2024-08-07T18:08:37.0264087Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0079s] [ 97%] 2024-08-07T18:08:37.0265353Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0072s] [ 97%] 2024-08-07T18:08:37.0266581Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0074s] [ 97%] 2024-08-07T18:08:37.0267793Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 97%] 2024-08-07T18:08:37.0269028Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0081s] [ 97%] 2024-08-07T18:08:37.0270324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0091s] [ 97%] 2024-08-07T18:08:37.0271558Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0081s] [ 97%] 
2024-08-07T18:08:37.0272797Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0075s] [ 97%] 2024-08-07T18:08:37.0274050Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0076s] [ 97%] 2024-08-07T18:08:37.0275347Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0077s] [ 97%] 2024-08-07T18:08:37.0276542Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0067s] [ 97%] 2024-08-07T18:08:37.0277777Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 97%] 2024-08-07T18:08:37.0278993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0085s] [ 97%] 2024-08-07T18:08:37.0280224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0081s] [ 97%] 2024-08-07T18:08:37.0281453Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0093s] [ 97%] 2024-08-07T18:08:37.0282720Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0092s] [ 98%] 2024-08-07T18:08:37.0284006Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0080s] [ 98%] 2024-08-07T18:08:37.0285222Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0079s] [ 98%] 2024-08-07T18:08:37.0286446Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0075s] [ 98%] 2024-08-07T18:08:37.0287669Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0089s] [ 98%] 2024-08-07T18:08:37.0288905Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0091s] [ 98%] 
2024-08-07T18:08:37.0290128Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0090s] [ 98%] 2024-08-07T18:08:37.0291376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0088s] [ 98%] 2024-08-07T18:08:37.0292647Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0091s] [ 98%] 2024-08-07T18:08:37.0293914Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0295574Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0063s] [ 98%] 2024-08-07T18:08:37.0296809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0063s] [ 98%] 2024-08-07T18:08:37.0298046Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0299262Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 98%] 2024-08-07T18:08:37.0300492Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0074s] [ 98%] 2024-08-07T18:08:37.0301804Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 98%] 2024-08-07T18:08:37.0303115Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 98%] 2024-08-07T18:08:37.0304321Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0305531Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0306765Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0061s] [ 98%] 
2024-08-07T18:08:37.0307993Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 98%] 2024-08-07T18:08:37.0309224Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 98%] 2024-08-07T18:08:37.0310441Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0071s] [ 98%] 2024-08-07T18:08:37.0311763Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 98%] 2024-08-07T18:08:37.0313082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 98%] 2024-08-07T18:08:37.0314304Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0063s] [ 98%] 2024-08-07T18:08:37.0315514Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0316766Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 98%] 2024-08-07T18:08:37.0318014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 98%] 2024-08-07T18:08:37.0319229Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0074s] [ 98%] 2024-08-07T18:08:37.0320509Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0073s] [ 98%] 2024-08-07T18:08:37.0321772Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0076s] [ 98%] 2024-08-07T18:08:37.0323014Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0071s] [ 98%] 2024-08-07T18:08:37.0324220Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 98%] 
2024-08-07T18:08:37.0325458Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 98%] 2024-08-07T18:08:37.0326703Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 98%] 2024-08-07T18:08:37.0327918Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 98%] 2024-08-07T18:08:37.0329146Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 98%] 2024-08-07T18:08:37.0330411Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 98%] 2024-08-07T18:08:37.0331701Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 98%] 2024-08-07T18:08:37.0332936Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 98%] 2024-08-07T18:08:37.0334150Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0335362Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0336586Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0337790Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 98%] 2024-08-07T18:08:37.0338992Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 98%] 2024-08-07T18:08:37.0340333Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 98%] 2024-08-07T18:08:37.0341595Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0069s] [ 98%] 
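For orientation amid the wall of PASSED lines: each ID is one point in a cartesian product over the test's parameters, which is why a single test function expands into hundreds of log records. A sketch of how such a grid expands, using value sets inferred from the IDs in this block (illustrative only; the authoritative @parametrize decorators live in test/test_transformers.py, and the real IDs also interleave the dtype and scale tokens and append a _cuda_<dtype> suffix, so the naming here is approximate):

from itertools import product

# Parameter sets read off the IDs in this block; not the authoritative lists.
grid = {
    "batch_size": [8],
    "seq_len_q": [8],
    "seq_len_k": [8, 64, 512],
    "head_dim": [8, 16, 32, 64],
    "is_causal": [False, True],
    "dropout_p": [0.0, 0.22],
    "dtype": ["float16", "float32"],
    "scale": ["scale0", "scale_l1"],
}
combos = list(product(*grid.values()))
print(len(combos))  # 192 cases for this slice of the grid alone

# Approximate reconstruction of one test ID from the first combination.
name = "test_mem_efficient_attention_vs_math_ref_grads_" + "_".join(
    f"{key}_{str(val).replace('.', '_')}" for key, val in zip(grid, combos[0])
)
print(name)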
2024-08-07T18:08:37.0342867Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_64_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 98%] 2024-08-07T18:08:37.0344082Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 98%] 2024-08-07T18:08:37.0345324Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 98%] 2024-08-07T18:08:37.0346534Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 98%] 2024-08-07T18:08:37.0347760Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0059s] [ 98%] 2024-08-07T18:08:37.0349016Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 98%] 2024-08-07T18:08:37.0350285Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0067s] [ 98%] 2024-08-07T18:08:37.0351517Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 98%] 2024-08-07T18:08:37.0352753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0066s] [ 98%] 2024-08-07T18:08:37.0353982Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0064s] [ 98%] 2024-08-07T18:08:37.0355195Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0061s] [ 98%] 2024-08-07T18:08:37.0356412Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0062s] [ 98%] 2024-08-07T18:08:37.0357612Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0061s] [ 98%] 2024-08-07T18:08:37.0358880Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0072s] [ 98%] 
2024-08-07T18:08:37.0360156Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0070s] [ 98%] 2024-08-07T18:08:37.0361351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0070s] [ 98%] 2024-08-07T18:08:37.0362608Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_16_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0068s] [ 98%] 2024-08-07T18:08:37.0363825Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 98%] 2024-08-07T18:08:37.0365066Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0058s] [ 98%] 2024-08-07T18:08:37.0366270Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0055s] [ 98%] 2024-08-07T18:08:37.0367556Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 98%] 2024-08-07T18:08:37.0368822Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 98%] 2024-08-07T18:08:37.0370060Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 98%] 2024-08-07T18:08:37.0371266Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 98%] 2024-08-07T18:08:37.0372518Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0070s] [ 98%] 2024-08-07T18:08:37.0373753Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 98%] 2024-08-07T18:08:37.0374958Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0060s] [ 98%] 2024-08-07T18:08:37.0376175Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 98%] 
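A few screens further down, the TestAttnBiasCUDA causal-variant cases skip CausalVariant.LOWER_RIGHT whenever seq_len_q > seq_len_kv, warning that the mask "will produce NaNs in the output". The reason is generic softmax behavior rather than a kernel bug: aligning the causal band to the bottom-right corner leaves the earliest query rows with no visible keys, and softmax over a row of -inf is 0/0. A minimal standalone demonstration (plain PyTorch, no SDPA kernels involved):

import torch

seq_len_q, seq_len_kv = 4, 2
# Lower-right alignment: query i may attend key j iff j <= i + (seq_len_kv - seq_len_q).
i = torch.arange(seq_len_q).unsqueeze(-1)
j = torch.arange(seq_len_kv)
mask = (j - i) <= (seq_len_kv - seq_len_q)
scores = torch.randn(seq_len_q, seq_len_kv)
scores = scores.masked_fill(~mask, float("-inf"))
print(scores.softmax(-1))  # first rows are all NaN: every entry was -inf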
2024-08-07T18:08:37.0377428Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0058s] [ 98%] 2024-08-07T18:08:37.0378708Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0067s] [ 99%] 2024-08-07T18:08:37.0379915Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0069s] [ 99%] 2024-08-07T18:08:37.0381127Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 99%] 2024-08-07T18:08:37.0382371Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_32_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0069s] [ 99%] 2024-08-07T18:08:37.0383589Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0059s] [ 99%] 2024-08-07T18:08:37.0384816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0056s] [ 99%] 2024-08-07T18:08:37.0386020Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 99%] 2024-08-07T18:08:37.0387310Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0055s] [ 99%] 2024-08-07T18:08:37.0388571Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0066s] [ 99%] 2024-08-07T18:08:37.0389807Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0065s] [ 99%] 2024-08-07T18:08:37.0391022Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0066s] [ 99%] 2024-08-07T18:08:37.0392259Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0065s] [ 99%] 2024-08-07T18:08:37.0393480Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0060s] [ 99%] 
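The ..._vs_math_ref_grads cases above compare the CUDA memory-efficient SDPA kernel against the composable "math" reference backend, for both the forward output and the input gradients. A minimal sketch of that check, assuming PyTorch >= 2.3 (for torch.nn.attention.sdpa_kernel) and a CUDA device; the tolerances are hypothetical, and the real test additionally handles the nonzero-dropout variants by reconstructing the kernel's dropout mask, which is omitted here:

import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

# One grid point, shaped (batch, heads, seq, head_dim); heads fixed at 1
# here for brevity (an illustrative choice, not read from the test).
batch_size, seq_len_q, seq_len_k, head_dim = 8, 8, 64, 32
dtype, is_causal = torch.float16, True

def qkv():
    mk = lambda s: torch.randn(batch_size, 1, s, head_dim, device="cuda",
                               dtype=dtype, requires_grad=True)
    return mk(seq_len_q), mk(seq_len_k), mk(seq_len_k)

q, k, v = qkv()
# Run the fused memory-efficient kernel only.
with sdpa_kernel([SDPBackend.EFFICIENT_ATTENTION]):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
out.sum().backward()

# Math reference in float64, then compared back at the low precision.
q_r, k_r, v_r = (t.detach().to(torch.float64).requires_grad_() for t in (q, k, v))
with sdpa_kernel([SDPBackend.MATH]):
    ref = F.scaled_dot_product_attention(q_r, k_r, v_r, is_causal=is_causal)
ref.sum().backward()

torch.testing.assert_close(out, ref.to(dtype), atol=2e-3, rtol=2e-3)
for got, want in ((q.grad, q_r.grad), (k.grad, k_r.grad), (v.grad, v_r.grad)):
    torch.testing.assert_close(got, want.to(dtype), atol=2e-3, rtol=2e-3)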
2024-08-07T18:08:37.0394685Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 99%] 2024-08-07T18:08:37.0396423Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0059s] [ 99%] 2024-08-07T18:08:37.0397740Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 99%] 2024-08-07T18:08:37.0398964Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0069s] [ 99%] 2024-08-07T18:08:37.0400171Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 99%] 2024-08-07T18:08:37.0401399Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0067s] [ 99%] 2024-08-07T18:08:37.0402621Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_64_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 99%] 2024-08-07T18:08:37.0403827Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0058s] [ 99%] 2024-08-07T18:08:37.0405051Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0057s] [ 99%] 2024-08-07T18:08:37.0406312Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0058s] [ 99%] 2024-08-07T18:08:37.0407605Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0054s] [ 99%] 2024-08-07T18:08:37.0408816Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0065s] [ 99%] 2024-08-07T18:08:37.0410053Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0066s] [ 99%] 2024-08-07T18:08:37.0411268Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0064s] [ 99%] 2024-08-07T18:08:37.0412511Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_False_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0063s] [ 99%] 2024-08-07T18:08:37.0413719Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale0_cuda_float16 PASSED [0.0061s] [ 99%] 2024-08-07T18:08:37.0414966Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float16_scale_l1_cuda_float16 PASSED [0.0059s] [ 99%] 2024-08-07T18:08:37.0416273Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale0_cuda_float32 PASSED [0.0060s] [ 99%] 2024-08-07T18:08:37.0417483Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_0_float32_scale_l1_cuda_float32 PASSED [0.0057s] [ 99%] 2024-08-07T18:08:37.0418705Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale0_cuda_float16 PASSED [0.0068s] [ 99%] 2024-08-07T18:08:37.0419919Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float16_scale_l1_cuda_float16 PASSED [0.0068s] [ 99%] 2024-08-07T18:08:37.0421144Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale0_cuda_float32 PASSED [0.0068s] [ 99%] 2024-08-07T18:08:37.0422354Z test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_vs_math_ref_grads_batch_size_8_seq_len_q_8_seq_len_k_8_head_dim_8_is_causal_True_dropout_p_0_22_float32_scale_l1_cuda_float32 PASSED [0.0067s] [ 99%] 2024-08-07T18:08:37.0423351Z test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_accuracy_type_dense_fused_kernel0_cuda PASSED [0.0293s] [ 99%] 2024-08-07T18:08:37.0424376Z test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_accuracy_type_nested_fused_kernel0_cuda PASSED [0.0494s] [ 99%] 2024-08-07T18:08:37.0425366Z test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_type_dense_is_contiguous_False_cuda PASSED [0.0091s] [ 99%] 2024-08-07T18:08:37.0426316Z test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_type_dense_is_contiguous_True_cuda PASSED [0.0087s] [ 99%] 2024-08-07T18:08:37.0427253Z test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_type_nested_is_contiguous_False_cuda PASSED [0.0353s] [ 99%] 2024-08-07T18:08:37.0428207Z test_transformers.py::TestSDPACudaOnlyCUDA::test_scaled_dot_product_attention_fused_kernels_packed_type_nested_is_contiguous_True_cuda PASSED [0.0330s] [ 99%] 2024-08-07T18:08:37.0428952Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_choice_with_determinism_warn_only_False_cuda PASSED [0.0020s] [ 99%] 2024-08-07T18:08:37.0429690Z 
test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_choice_with_determinism_warn_only_True_cuda PASSED [0.0016s] [ 99%] 2024-08-07T18:08:37.0430986Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_False_is_causal_False_bfloat16_cuda_bfloat16 SKIPPED [0.0004s] (Flash Attention was not built for this system) [ 99%] 2024-08-07T18:08:37.0432239Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_False_is_causal_False_float16_cuda_float16 SKIPPED [0.0003s] (Flash Attention was not built for this system) [ 99%] 2024-08-07T18:08:37.0433543Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_False_is_causal_True_bfloat16_cuda_bfloat16 SKIPPED [0.0003s] (Flash Attention was not built for this system) [ 99%] 2024-08-07T18:08:37.0434830Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_False_is_causal_True_float16_cuda_float16 SKIPPED [0.0004s] (Flash Attention was not built for this system) [ 99%] 2024-08-07T18:08:37.0436162Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_True_is_causal_False_bfloat16_cuda_bfloat16 SKIPPED [0.0002s] (Flash Attention was not built for this system) [ 99%] 2024-08-07T18:08:37.0437394Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_True_is_causal_False_float16_cuda_float16 SKIPPED [0.0002s] (Flash Attention was not built for this system) [ 99%] 2024-08-07T18:08:37.0438655Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_True_is_causal_True_bfloat16_cuda_bfloat16 SKIPPED [0.0003s] (Flash Attention was not built for this system) [ 99%] 2024-08-07T18:08:37.0439892Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_flash_attention_grad_against_math_contiguous_inputs_True_is_causal_True_float16_cuda_float16 SKIPPED [0.0003s] (Flash Attention was not built for this system) [ 99%] 2024-08-07T18:08:37.0440809Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_mem_efficient_grad_against_math_contiguous_inputs_False_is_causal_False_cuda PASSED [0.0046s] [ 99%] 2024-08-07T18:08:37.0441726Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_mem_efficient_grad_against_math_contiguous_inputs_False_is_causal_True_cuda PASSED [0.0039s] [ 99%] 2024-08-07T18:08:37.0442624Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_mem_efficient_grad_against_math_contiguous_inputs_True_is_causal_False_cuda PASSED [0.0037s] [ 99%] 2024-08-07T18:08:37.0443591Z test_transformers.py::TestSDPACudaOnlyCUDA::test_sdp_mem_efficient_grad_against_math_contiguous_inputs_True_is_causal_True_cuda PASSED [0.0039s] [ 99%] 2024-08-07T18:08:37.0444511Z test_transformers.py::TestSDPACudaOnlyCUDA::test_singelton_head_dim_stride_ne_1_cuda SKIPPED [0.0003s] (Fused SDPA was not built for this system) [ 99%] 2024-08-07T18:08:37.0445353Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_LOWER_RIGHT_shape0_cuda PASSED [0.0504s] [ 99%] 2024-08-07T18:08:37.0446197Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_LOWER_RIGHT_shape1_cuda PASSED [0.1024s] [ 99%] 2024-08-07T18:08:37.0447467Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_LOWER_RIGHT_shape2_cuda SKIPPED [0.0017s] (Lower right causal mask 
will produce NaNs in the output when seq_len_q > seq_len_kv!) [ 99%] 2024-08-07T18:08:37.0448319Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_LOWER_RIGHT_shape3_cuda PASSED [0.0048s] [ 99%] 2024-08-07T18:08:37.0449149Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_UPPER_LEFT_shape0_cuda PASSED [0.0497s] [ 99%] 2024-08-07T18:08:37.0449968Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_UPPER_LEFT_shape1_cuda PASSED [0.0792s] [ 99%] 2024-08-07T18:08:37.0450807Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_UPPER_LEFT_shape2_cuda PASSED [0.1002s] [ 99%] 2024-08-07T18:08:37.0451628Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_causal_variant_CausalVariant_UPPER_LEFT_shape3_cuda PASSED [0.0049s] [ 99%] 2024-08-07T18:08:37.0452516Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_LOWER_RIGHT_shape0_cuda PASSED [0.8267s] [ 99%] 2024-08-07T18:08:37.0453435Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_LOWER_RIGHT_shape1_cuda PASSED [0.2263s] [ 99%] 2024-08-07T18:08:37.0454889Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_LOWER_RIGHT_shape2_cuda SKIPPED [0.0039s] (Lower right causal mask will produce NaNs in the output when seq_len_q > seq_len_kv!) [ 99%] 2024-08-07T18:08:37.0455760Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_LOWER_RIGHT_shape3_cuda PASSED [0.1509s] [ 99%] 2024-08-07T18:08:37.0456626Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_UPPER_LEFT_shape0_cuda PASSED [0.1309s] [ 99%] 2024-08-07T18:08:37.0457508Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_UPPER_LEFT_shape1_cuda PASSED [0.1578s] [ 99%] 2024-08-07T18:08:37.0458376Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_UPPER_LEFT_shape2_cuda PASSED [0.1813s] [ 99%] 2024-08-07T18:08:37.0459263Z test_transformers.py::TestAttnBiasCUDA::test_causal_variants_compile_causal_variant_CausalVariant_UPPER_LEFT_shape3_cuda PASSED [0.0942s] [ 99%] 2024-08-07T18:08:37.0459870Z test_transformers.py::TestAttnBiasCUDA::test_is_causal_and_mask_fails_cuda PASSED [0.0022s] [ 99%] 2024-08-07T18:08:37.0460533Z test_transformers.py::TestAttnBiasCUDA::test_is_causal_equals_upper_left_shape0_cuda PASSED [0.0109s] [ 99%] 2024-08-07T18:08:37.0461218Z test_transformers.py::TestAttnBiasCUDA::test_is_causal_equals_upper_left_shape1_cuda PASSED [0.0119s] [ 99%] 2024-08-07T18:08:37.0461863Z test_transformers.py::TestAttnBiasCUDA::test_is_causal_equals_upper_left_shape2_cuda PASSED [0.0226s] [ 99%] 2024-08-07T18:08:37.0462586Z test_transformers.py::TestAttnBiasCUDA::test_is_causal_equals_upper_left_shape3_cuda PASSED [0.0023s] [100%] 2024-08-07T18:08:37.0462648Z 2024-08-07T18:08:37.0463332Z - generated xml file: /var/lib/jenkins/workspace/test/test-reports/python-pytest/test_transformers/test_transformers-68dbd8fab867c5cc.xml - 2024-08-07T18:08:37.0463942Z ======== 7729 passed, 11 skipped, 37604 deselected in 122.91s (0:02:02) ======== 2024-08-07T18:08:37.0465186Z The following tests failed and then succeeded when run in a new 
process['test/test_transformers.py::TestSDPACudaOnlyCUDA::test_mem_efficient_attention_attn_mask_vs_math_ref_grads_batch_size_1_seq_len_q_128_seq_len_k_256_head_dim_64_is_causal_False_dropout_p_0_0_float16_scale_l1_cuda_float16'] 2024-08-07T18:08:37.0465207Z 2024-08-07T18:08:37.0465676Z FINISHED PRINTING LOG FILE of test_transformers 1/1 (test/test-reports/test_transformers_1.1_2ac14b314d452749_.log) 2024-08-07T18:08:37.0465694Z 2024-08-07T18:08:37.0465961Z Running functorch/test_ops 7/9 ... [2024-08-07 18:08:25.554688] 2024-08-07T18:08:37.0466979Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'functorch/test_ops.py', '-m', 'not serial', '--shard-id=7', '--num-shards=9', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:08:25.555279] 2024-08-07T18:13:17.3630496Z 2024-08-07T18:13:17.3634752Z functorch/test_ops 2/9 was successful, full logs can be found in artifacts with path test/test-reports/functorch.test_ops_2.9_0da5ccb26741bd7a_.log 2024-08-07T18:13:17.4160770Z Running 1137 items in this shard: test/functorch/test_ops.py::TestOperatorsCUDA::test_extremal_numerics_cross_entropy_cuda, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_CubeGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_MulGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_SelectAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_SortGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_T_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad___getitem___functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad___rmod___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad__batch_norm_with_update_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_addr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_argsort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_as_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_asin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_asinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_atan_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_atleast_2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_atleast_3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_broadcast_tensors_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_combinations_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_count_nonzero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_cross_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_diagonal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_diagonal_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_dot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_expand_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_fft_ihfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_fft_irfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_fft_rfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_flip_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_heaviside_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_int_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_isnan_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_isposinf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_kthvalue_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_cond_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_eig_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_eigvalsh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_lu_factor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_matrix_power_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_solve_triangular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_vander_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_log_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_log_normal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_logaddexp2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_logcumsumexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_logit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_lu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_mT_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_masked_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_masked_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_masked_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_mul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_mvlgamma_mvlgamma_p_1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_new_empty_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_new_ones_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_adaptive_max_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_alpha_dropout_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_conv2d_stride_depthwise_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_conv2d_stride_groups_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_conv2d_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_dropout_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_grid_sample_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_group_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_interpolate_nearest-exact_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_interpolate_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_kl_div_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_leaky_relu_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_max_unpool1d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_mse_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_relu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_norm_fro_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_normal_number_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_pca_lowrank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_pinverse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_qr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_rad2deg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_randn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_round_decimals_0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_rsub_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_scatter_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_scatter_reduce_sum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_signal_windows_general_cosine_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_signal_windows_kaiser_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_signal_windows_nuttall_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_sinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_slice_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_chebyshev_polynomial_t_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_legendre_polynomial_p_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_scaled_modified_bessel_k0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_shifted_chebyshev_polynomial_u_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_xlog1py_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_squeeze_multiple_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_std_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_stft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_tile_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_to_sparse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_trapz_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_true_divide_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_unique_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_var_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_NumpyCubeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_ZeroGradientsGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp___radd___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp___rmatmul___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp___rpow___cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_addcmul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_angle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_atan2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_baddbmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_bucketize_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_byte_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_cauchy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_cfloat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_cholesky_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_cross_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_diag_embed_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_eq_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_exp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_eye_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_fft_ifftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_fft_ihfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_fft_rfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_floor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_fmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_geometric_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_half_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_hsplit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_index_put_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_index_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_int_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_isfinite_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_linalg_cond_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_linalg_slogdet_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_logaddexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_lu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_masked_logsumexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_masked_median_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_masked_normalize_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_max_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_max_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_mv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_new_zeros_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_conv1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_conv2d_stride_padding_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_conv2d_strided_padding_dilation_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_embedding_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_embedding_functorch_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_gaussian_nll_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_interpolate_bilinear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_leaky_relu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_pixel_shuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_softshrink_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_tanhshrink_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_triplet_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_upsample_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_norm_fro_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_permute_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_polygamma_polygamma_n_0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_pow_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_randn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_reciprocal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_renorm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_select_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_signal_windows_general_cosine_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_signal_windows_kaiser_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_special_airy_ai_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_trapz_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_triangular_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_unfold_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_unsafe_split_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_var_mean_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_NumpyCubeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_NumpyExpMarkDirtyAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp___radd___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp___rsub___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp__unsafe_masked_index_put_accumulate_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_abs_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_addcmul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_arange_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_atan_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_diagflat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_fft_fftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_fft_fftshift_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_fft_rfft_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_frac_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_frexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_index_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_isfinite_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_isin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_jiterator_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_cholesky_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_diagonal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_matrix_rank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_pinv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_slogdet_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_svdvals_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_logaddexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_masked_argmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_masked_logaddexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_masked_median_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_mvlgamma_mvlgamma_p_5_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nan_to_num_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nanquantile_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_ne_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_avg_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_conv2d_stride_depthwise_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_conv_transpose1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_cosine_similarity_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_cross_entropy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_elu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_interpolate_bicubic_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_interpolate_linear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_layer_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_max_unpool2d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_poisson_nll_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_rms_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_softplus_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_softsign_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_ops_aten__new_zeros_with_same_feature_meta_functorchonly_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_ormqr_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_reciprocal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_roll_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_rot90_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_round_decimals_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_signal_windows_bartlett_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_signal_windows_gaussian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_signal_windows_general_cosine_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_chebyshev_polynomial_t_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_i1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_legendre_polynomial_p_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_scaled_modified_bessel_k0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_scaled_modified_bessel_k1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_sum_to_size_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_to_sparse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_trapz_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_unfold_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_unsqueeze_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_var_mean_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjpvmap_CubeGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjpvmap_ForwardHasDefaultArgsAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjpvmap_MulGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjpvmap_NumpyExpMarkDirtyAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvmap_ForwardHasDefaultArgsAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_diagonal_grad_op_jvp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_movedim_grad_op_jvp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_movedim_grad_op_vjp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_real_grad_op_jvp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_special_grad_op_jvp_cuda, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_transpose_grad_op_jvp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_view_as_complex_grad_op_vjp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_H_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_NumpyTakeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp__unsafe_masked_index_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_addcdiv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_alias_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_any_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_atan2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_atleast_3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_bfloat16_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_byte_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_cauchy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_cholesky_inverse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_chunk_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_copysign_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_cross_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_deg2rad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_double_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_einsum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_erfc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_exponential_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_fft_ifft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_fft_irfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_flip_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_fmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_index_put_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_jiterator_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_ldexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_lgamma_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_cholesky_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_cross_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_lstsq_grad_oriented_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_lu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_lu_factor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_lu_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_norm_subgradients_at_zero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_tensorinv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_log_softmax_with_dtype_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_logical_or_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_logspace_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_lt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_lu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_masked_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_masked_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_masked_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_max_pool2d_with_indices_backward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_max_reduction_with_dim_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_maximum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_adaptive_avg_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_avg_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_conv1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_conv2d_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_conv_transpose2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_cross_entropy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_dropout_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_fractional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_fractional_max_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_gaussian_nll_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_glu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_group_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_leaky_relu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_max_unpool3d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_soft_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_threshold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_polar_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_pow_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_rad2deg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_remainder_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_resolve_neg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_round_decimals_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_rsqrt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_signal_windows_gaussian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_chebyshev_polynomial_w_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_hermite_polynomial_h_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_spherical_bessel_j0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_stack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_std_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_std_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_sub_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_to_sparse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_unique_consecutive_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_unsafe_chunk_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp___getitem___functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_addr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_atanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_bool_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_broadcast_tensors_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_contiguous_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_cos_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_count_nonzero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_cov_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_cross_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_cumsum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_diag_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_dist_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_dstack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_erfinv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_exponential_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_fft_hfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_fft_ifftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_flip_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_fliplr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_float_power_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_floor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_histc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_index_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_int_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_isinf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_kron_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_kthvalue_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_cholesky_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_ldl_factor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_ldl_factor_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_lu_factor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_qr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_svdvals_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linspace_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_logaddexp2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_logical_and_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_masked_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_masked_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_masked_fill_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_masked_fill_functorch_Scalar_only_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_masked_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_max_reduction_no_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_msort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_mvlgamma_mvlgamma_p_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_native_layer_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_conv1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_conv2d_strided_padding_dilation_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_dropout_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_threshold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_permute_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_rad2deg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_reciprocal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_resize__cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_roll_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_scatter_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_signal_windows_general_hamming_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_sort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_sparse_sampled_addmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_special_entr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_split_with_sizes_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_std_mean_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_sum_to_size_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_svd_lowrank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_to_sparse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_true_divide_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_unbind_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_unflatten_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_unsafe_chunk_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_view_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_zeros_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjpvmap_MulGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_ForwardHasDefaultArgsAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap___getitem___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap__segment_reduce_lengths_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap__upsample_bilinear2d_aa_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_acos_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_argmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_as_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_broadcast_tensors_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_cat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_cdist_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_clone_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_column_stack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_combinations_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_contiguous_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_corrcoef_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_empty_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_erfinv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_fft_fftshift_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_fft_ifftshift_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_fft_ihfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_fft_irfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_frac_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_frexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_isreal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_jiterator_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_linalg_eig_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_linalg_tensorsolve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_log1p_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_logaddexp2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_masked_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_masked_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_masked_normalize_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_matrix_exp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_meshgrid_variadic_tensors_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_mm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_mvlgamma_mvlgamma_p_1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_mvlgamma_mvlgamma_p_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_new_zeros_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_adaptive_max_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_conv3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_embedding_bag_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_feature_alpha_dropout_with_train_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_fractional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_glu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_interpolate_area_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_interpolate_bilinear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_mish_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_mse_loss_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_multi_head_attention_forward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_multi_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_normalize_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_pixel_shuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_prelu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_relu6_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_silu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nonzero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_norm_inf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_normal_in_place_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_outer_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_positive_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_roll_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_rsub_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_scatter_reduce_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_searchsorted_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_sigmoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_signal_windows_bartlett_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_sinc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_sinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_sort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_special_chebyshev_polynomial_v_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_special_i1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_special_log_ndtr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_special_scaled_modified_bessel_k0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_special_spherical_bessel_j0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_split_with_sizes_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_stack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_svd_lowrank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_tan_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_unique_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_unsafe_chunk_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_vdot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_view_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmapvmap_MulGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_MulGenVmapAutogradFunction_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_NumpyTakeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_addmv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_addr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_alias_copy_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_any_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_argsort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_as_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_as_strided_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_asinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_baddbmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_bfloat16_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_bfloat16_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_bmm_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_cfloat_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_chalf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_chalf_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_cholesky_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_cholesky_inverse_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_chunk_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_clamp_max_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_clone_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_contiguous_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_count_nonzero_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_diag_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_diag_embed_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_digamma_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_dist_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_double_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_expand_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_hfft2_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_ifft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_ifftshift_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_ihfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_ihfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_irfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_rfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fill_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_floor_divide_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fmod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_frexp_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_ge_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_gt_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_half_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_half_functorch_no_channels_last_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_histc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_index_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_index_fill_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_index_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_isclose_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_item_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_jiterator_unary_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_matrix_power_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_pinv_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_solve_triangular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_vander_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_log2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_log_softmax_with_dtype_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_logdet_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_logsumexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_lu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_lu_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_lu_unpack_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_mH_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_mT_cuda_float64, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_amin_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_fill_functorch_Scalar_only_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_logaddexp_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_normalize_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_scatter_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_max_binary_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_mean_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_mode_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_mul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_mvlgamma_mvlgamma_p_5_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nansum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_neg_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_new_zeros_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_adaptive_avg_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_adaptive_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_avg_pool1d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_avg_pool2d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_channel_shuffle_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_conv2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_conv2d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_conv2d_strided_padding_dilation_with_bias_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_conv2d_with_bias_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_conv3d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_conv_transpose3d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_cosine_similarity_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_ctc_loss_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_dropout3d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_embedding_functorch_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_fractional_max_pool3d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_interpolate_bilinear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_interpolate_nearest_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_l1_loss_cuda_float64, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_leaky_relu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_leaky_relu_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_linear_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_max_unpool2d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_max_unpool3d_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_mish_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_mse_loss_functorch_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_pad_circular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_pad_replicate_negative_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_pad_replicate_negative_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_pdist_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_prelu_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_soft_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_softplus_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_softsign_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_threshold_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_triplet_margin_loss_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_upsample_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nonzero_static_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_ones_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_pca_lowrank_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_prod_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_rand_like_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_reciprocal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_remainder_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_remainder_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_reshape_as_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_scatter_add_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_scatter_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_scatter_reduce_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_scatter_reduce_mean_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_select_scatter_cuda_float64, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_sgn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_signal_windows_cosine_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_signal_windows_gaussian_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_sinc_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_slice_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_sort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_sparse_sampled_addmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_bessel_j0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_chebyshev_polynomial_u_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_chebyshev_polynomial_w_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_i1_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_laguerre_polynomial_l_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_legendre_polynomial_p_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_log_ndtr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_ndtr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_ndtr_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_scaled_modified_bessel_k1_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_shifted_chebyshev_polynomial_u_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_shifted_chebyshev_polynomial_v_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_split_list_args_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_split_with_sizes_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_split_with_sizes_copy_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_squeeze_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_squeeze_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_std_mean_unbiased_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_sum_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_svd_lowrank_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_t_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_t_copy_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_take_along_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_transpose_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_tril_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_unfold_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_uniform_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_unsafe_split_cuda_float64, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_vsplit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_NumpyExpMarkDirtyAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_addmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_angle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_as_strided_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_bernoulli_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_bool_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_broadcast_tensors_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_copysign_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_digamma_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_double_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_expand_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_fft_fft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_fft_fftshift_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_fft_ifft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_fft_ihfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_fft_rfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_flip_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_float_power_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_fmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_fmod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_geqrf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_ForwardHasDefaultArgsAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_NumpyCubeNotComposableAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_SortGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule__batch_norm_with_update_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_acos_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_addr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_atan_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_atleast_3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_bernoulli_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_cartesian_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_char_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_cholesky_inverse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_cos_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_deg2rad_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_diagonal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_div_trunc_rounding_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_empty_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_equal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_exp2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_fft_hfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_fft_hfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_fft_ifftshift_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_fft_ihfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_grid_sampler_2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_isfinite_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_kron_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_ldexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_le_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_linalg_eigh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_linalg_lu_factor_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_linalg_matrix_power_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_linalg_matrix_rank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_linalg_pinv_singular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_linalg_qr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_log_normal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_logical_xor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_masked_fill_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_masked_std_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_masked_sum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_matmul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_maximum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_minimum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_mul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nanmedian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_native_layer_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_adaptive_max_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_binary_cross_entropy_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_conv2d_stride_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_cosine_embedding_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_cosine_similarity_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_cross_entropy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_ctc_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_dropout_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_gaussian_nll_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_interpolate_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_max_unpool1d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_multi_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_scaled_dot_product_attention_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_selu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_polygamma_polygamma_n_4_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_ravel_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_reciprocal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_scatter_reduce_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_short_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_signal_windows_gaussian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_signal_windows_hamming_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_special_bessel_j1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_special_chebyshev_polynomial_w_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_special_hermite_polynomial_he_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_special_i1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_special_zeta_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_split_with_sizes_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_std_mean_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_sub_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_t_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_tensordot_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_true_divide_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_unique_consecutive_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_view_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_vsplit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_hsplit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_hypot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_isfinite_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_kron_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_lu_factor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_multi_dot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_pinv_singular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_mH_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_masked_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_masked_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_matmul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_mm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_native_dropout_backward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_ne_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_new_empty_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_new_empty_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_batch_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_channel_shuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_conv2d_stride_padding_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_cross_entropy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_glu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_local_response_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_max_unpool1d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_multi_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_pad_constant_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_pad_replicate_negative_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_pairwise_distance_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_pixel_shuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_selu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_softshrink_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_threshold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nonzero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_qr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_quantile_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_slice_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_special_bessel_j0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_special_i0e_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_special_shifted_chebyshev_polynomial_w_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_squeeze_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_sum_to_size_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_svd_lowrank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_t_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_transpose_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_unflatten_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_unsafe_split_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_view_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_view_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_SelectAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_SelectGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp___rdiv___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp__chunk_cat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_addbmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_addr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_allclose_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_angle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_argsort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_asinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_bfloat16_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_bool_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_broadcast_tensors_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_clamp_max_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_constant_pad_nd_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_corrcoef_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_cross_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_deg2rad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_dsplit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_fft_hfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_index_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_isfinite_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_isinf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_jiterator_unary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_eigvalsh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_ldl_factor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_matrix_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_pinv_singular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_solve_triangular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_svd_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_log2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_logsumexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_long_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_lu_unpack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_mT_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_masked_fill_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_masked_logaddexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_masked_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_masked_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_msort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nansum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_narrow_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_native_layer_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_adaptive_avg_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_adaptive_max_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_celu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_channel_shuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_conv2d_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_conv2d_strided_padding_dilation_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_conv2d_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_elu_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_embedding_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_interpolate_linear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_max_unpool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_mse_loss_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_multilabel_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_pad_circular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_pdist_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nonzero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_pca_lowrank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_scatter_reduce_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_select_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_signal_windows_bartlett_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_signal_windows_general_hamming_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_signbit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_sparse_sampled_addmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_split_with_sizes_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_t_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_tensordot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_trapz_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_view_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_vstack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvmap_CubeGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_NumpyExpMarkDirtyAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp___rdiv___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp___rmul___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp__native_batch_norm_legit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp__unsafe_masked_index_put_accumulate_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp__upsample_bilinear2d_aa_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_addmm_decomposed_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_addmv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_angle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_arange_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_argwhere_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_bmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_diagonal_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_fft_fft2_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_fft_ihfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_fft_irfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_fft_rfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_fliplr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_float_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_full_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_half_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_CubeGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_NumpySortAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_SelectAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_SelectGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule___getitem___functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule___rmatmul___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule___rmod___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule__unsafe_masked_index_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule__unsafe_masked_index_put_accumulate_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_addcmul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_as_strided_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_atleast_2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_bernoulli_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_bool_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_cdouble_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_ceil_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_cholesky_inverse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_clamp_max_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_cummin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_cumulative_trapezoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_deg2rad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_diagonal_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_double_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_exponential_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_fft_fft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_fft_ifftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_fft_ihfft_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_float_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_fmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_gather_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_heaviside_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_isin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_ldexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_det_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_diagonal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_ldl_factor_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_matrix_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_matrix_rank_hermitian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_pinv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_svd_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_logical_or_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_lt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_mT_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_masked_argmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_masked_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_masked_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_masked_var_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_max_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_min_reduction_no_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_msort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_mvlgamma_mvlgamma_p_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_new_ones_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_avg_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_celu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_conv2d_stride_padding_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_fractional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_max_unpool1d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_max_unpool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_pad_reflect_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_pad_replicate_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_pad_replicate_negative_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_selu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_softmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_upsample_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_rad2deg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_randint_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_reciprocal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_signal_windows_general_cosine_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_signal_windows_hamming_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_signal_windows_nuttall_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_special_hermite_polynomial_h_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_special_shifted_chebyshev_polynomial_w_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_split_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_to_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_trunc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_unsafe_split_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_var_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_where_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_histc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_index_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_jiterator_4inputs_with_extra_args_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_jiterator_unary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_kron_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_linspace_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_log_normal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_logaddexp2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_lu_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_maximum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_mvlgamma_mvlgamma_p_1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_new_empty_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_new_zeros_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_adaptive_avg_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_adaptive_max_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_alpha_dropout_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_avg_pool1d_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_bilinear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_conv2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_conv2d_stride_groups_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_conv_transpose1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_ctc_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_dropout3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_embedding_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_grid_sample_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_hinge_embedding_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_huber_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_interpolate_nearest-exact_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_leaky_relu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_multilabel_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_pixel_shuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_pixel_unshuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_prelu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_rrelu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_softplus_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_upsample_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nonzero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nonzero_static_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_ops_aten_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_resolve_conj_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_round_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_round_decimals_neg_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_scatter_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_scatter_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_sign_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_signal_windows_hamming_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_special_bessel_j1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_special_bessel_y0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_special_erfcx_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_special_modified_bessel_k0_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_special_modified_bessel_k1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_special_scaled_modified_bessel_k1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_take_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_tensordot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_SelectAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_T_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp___getitem___functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_abs_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_addcdiv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_atanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_bfloat16_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_clone_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_diag_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_double_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_dstack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_eq_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_expand_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_fft_ifftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_gather_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_geometric_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_hstack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_index_fill_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_isclose_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_jiterator_unary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_cond_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_cross_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_det_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_ldl_factor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_pinv_singular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_svd_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_vector_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_log1p_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_log_normal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_logical_xor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_mH_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_masked_logaddexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_matrix_exp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_min_binary_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_min_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nan_to_num_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nanquantile_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_native_dropout_backward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_native_layer_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_batch_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_conv3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_cosine_similarity_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_hardtanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_interpolate_bicubic_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_interpolate_linear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_pad_constant_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_pad_reflect_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_pixel_shuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_polygamma_polygamma_n_2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_positive_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_pow_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_rad2deg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_remainder_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_reshape_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_scatter_reduce_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_special_log_ndtr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_split_list_args_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_squeeze_multiple_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_take_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_trace_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_unsqueeze_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_view_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_vstack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_zero__cuda_float32 2024-08-07T18:13:17.4657053Z 2024-08-07T18:13:21.2568600Z Running test_ops 2/11 ... [2024-08-07 18:13:21.256302] 2024-08-07T18:13:21.2572685Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_ops.py', '-m', 'not serial', '--shard-id=2', '--num-shards=11', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 18:13:21.256828] 2024-08-07T18:18:05.4223830Z 2024-08-07T18:18:05.4227312Z functorch/test_ops 7/9 was successful, full logs can be found in artifacts with path test/test-reports/functorch.test_ops_7.9_f92badfde39bc759_.log 2024-08-07T18:18:05.4729101Z Running 1115 items in this shard: test/functorch/test_ops.py::TestOperatorsCUDA::test_extremal_numerics_layer_norm_cuda, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_NumpyCubeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad___rmatmul___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad___rpow___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_all_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_any_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_arange_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_atanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_baddbmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_bfloat16_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_bool_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_broadcast_shapes_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_broadcast_to_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_cholesky_inverse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_clamp_min_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_clone_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_conj_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_cosh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_cummax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_diagflat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_div_no_rounding_mode_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_div_trunc_rounding_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_dstack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_expand_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_fft_hfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_fft_rfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_floor_divide_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_histc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_inner_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_isfinite_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_isreal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_jiterator_2inputs_2outputs_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_lu_factor_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linalg_norm_subgradients_at_zero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_linspace_tensor_overload_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_lu_unpack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_masked_logsumexp_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_matrix_exp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_max_pool2d_with_indices_backward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_meshgrid_variadic_tensors_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_msort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_mv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nan_to_num_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_new_zeros_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_conv2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_cross_entropy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_ctc_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_gelu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_interpolate_area_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_mish_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_mse_loss_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_nll_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_pad_constant_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_pairwise_distance_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_soft_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_norm_inf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_ops_aten__new_zeros_with_same_feature_meta_functorchonly_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_polar_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_renorm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_reshape_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_rsqrt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_searchsorted_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_short_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_entr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_i0e_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_ndtr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_special_ndtri_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_std_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_tensor_split_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_tensordot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_unbind_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_unflatten_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_grad_unfold_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_CubeGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_NumpyExpMarkDirtyAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_addmm_decomposed_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_argwhere_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_atleast_3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_bernoulli_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_bfloat16_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_bfloat16_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_chunk_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_contiguous_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_corrcoef_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_cumsum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_dist_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_empty_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_expand_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_exponential_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_fft_hfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_fft_hfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_fft_ifft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_frexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_geqrf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_igammac_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_index_reduce_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_index_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_isclose_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_jiterator_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_le_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_linalg_ldl_factor_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_linalg_qr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_linalg_solve_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_linalg_svd_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_linalg_tensorsolve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_log2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_log_softmax_with_dtype_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_long_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_lu_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_masked_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_masked_fill_functorch_Scalar_only_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_masked_log_softmax_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_masked_sum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_max_pool2d_with_indices_backward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_min_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_min_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_minimum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_multinomial_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nansum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_adaptive_avg_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_adaptive_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_conv2d_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_dropout2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_embedding_bag_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_interpolate_area_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_interpolate_nearest-exact_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_interpolate_trilinear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_linear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_max_unpool1d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_nn_functional_pad_replicate_negative_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_ones_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_outer_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_randint_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_scatter_reduce_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_scatter_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_scatter_reduce_sum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_short_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_short_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_signal_windows_hamming_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_slice_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_special_ndtri_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_split_with_sizes_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_std_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_stft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_take_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_tanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_tril_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvp_var_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpjvpvmap_NumpySortAutogradFunction_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_SortGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp___rmod___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp__unsafe_masked_index_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_addcdiv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_alias_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_atanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_atleast_1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_block_diag_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_broadcast_to_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_clamp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_combinations_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_conj_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_conj_physical_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_cos_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_cov_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_diag_embed_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_div_no_rounding_mode_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_double_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_einsum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_erf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_fmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_gradient_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_histc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_hsplit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_index_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_isclose_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_isinf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_isneginf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_jiterator_unary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_det_singular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_lu_factor_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_matrix_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_multi_dot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_solve_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_solve_triangular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_linalg_vector_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_logical_or_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_long_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_lt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_lu_unpack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_masked_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_masked_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_masked_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_matmul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_min_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_minimum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_narrow_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_neg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_new_full_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nextafter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_adaptive_avg_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_adaptive_avg_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_conv2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_conv2d_stride_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_conv2d_stride_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_conv3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_conv_transpose3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_interpolate_trilinear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_mse_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_pad_replicate_negative_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_prelu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_nn_functional_relu6_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_norm_inf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_norm_nuc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_rand_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_randint_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_remainder_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_repeat_interleave_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_round_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_scatter_reduce_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_signal_windows_blackman_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_signal_windows_hann_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_sin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_sinc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_airy_ai_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_chebyshev_polynomial_u_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_modified_bessel_k1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_special_spherical_bessel_j0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_square_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_trunc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_unbind_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_uniform_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_var_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_view_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_zeros_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvjp_zeros_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvmap_MulGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvmap_NumpyTakeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvmapvmap_ForwardHasDefaultArgsAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvmapvmap_NumpyCubeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_jvpvmapvmap_NumpyCubeNotComposableAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_expand_grad_op_vjp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_list_return_unbind_grad_op_vjp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_narrow_grad_op_jvp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_narrow_grad_op_vjp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_resolve_conj_grad_op_vjp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_select_grad_op_jvp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_unfold_grad_op_jvp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_view_then_inplace_unsqueeze_grad_op_vjp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_ZeroGradientsGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp___getitem___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp___rmul___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp__segment_reduce_offsets_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp__softmax_backward_data_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_as_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_as_strided_partial_views_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_asinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_cdouble_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_clamp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_clamp_max_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_cov_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_double_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_exp_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_expand_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_fft_irfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_fliplr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_fmod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_index_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_isnan_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_eigvals_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_inv_ex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_linalg_vander_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_long_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_masked_fill_functorch_Scalar_only_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_masked_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_masked_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_max_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_min_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_mode_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_narrow_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_adaptive_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_conv2d_stride_depthwise_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_dropout3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_interpolate_linear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_nll_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_relu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_nn_functional_triplet_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_norm_inf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_ormqr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_randint_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_real_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_repeat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_resolve_conj_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_scalar_tensor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_select_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_signal_windows_exponential_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_signal_windows_nuttall_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_signbit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_sin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_sinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_slice_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_laguerre_polynomial_l_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_legendre_polynomial_p_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_ndtr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_ndtri_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_shifted_chebyshev_polynomial_t_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_shifted_chebyshev_polynomial_v_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_special_xlog1py_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_split_list_args_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_std_mean_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_svd_lowrank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_tan_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_true_divide_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_unflatten_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_view_as_complex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjp_zero__cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_CubeGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_T_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_addmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_any_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_argmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_as_strided_partial_views_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_cartesian_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_cat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_char_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_constant_pad_nd_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_double_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_empty_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_expand_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_fft_fftshift_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_fft_hfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_fft_ifft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_fft_rfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_flatten_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_float_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_frac_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_index_put_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_le_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_eigvalsh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_lu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_matrix_power_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_linalg_pinv_hermitian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_log_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_logcumsumexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_logit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_mT_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_masked_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_masked_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_masked_sum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_matrix_exp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_max_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_min_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nanquantile_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nextafter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_adaptive_avg_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_batch_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_channel_shuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_conv2d_stride_depthwise_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_conv2d_stride_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_conv_transpose1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_dropout3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_gaussian_nll_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_hinge_embedding_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_interpolate_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_kl_div_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_margin_ranking_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_nn_functional_triplet_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_outer_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_polygamma_polygamma_n_0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_polygamma_polygamma_n_4_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_randint_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_ravel_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_repeat_interleave_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_scalar_tensor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_sigmoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_signal_windows_exponential_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_signal_windows_general_cosine_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_special_airy_ai_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_special_i0e_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_special_modified_bessel_k1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_special_shifted_chebyshev_polynomial_u_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_special_shifted_chebyshev_polynomial_v_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_special_shifted_chebyshev_polynomial_w_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_squeeze_multiple_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_sub_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_take_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_trapz_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_unfold_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_uniform_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_unique_consecutive_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_unique_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_var_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjp_zero__cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvjpvmap_SortGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_NumpyCubeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_SelectGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_ZeroGradientsGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap___radd___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap___rmul___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_acosh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_addcdiv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_addcmul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_addmm_decomposed_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_addr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_alias_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_as_strided_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_atleast_2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_bernoulli_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_bfloat16_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_bmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_broadcast_to_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_cov_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_cummin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_digamma_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_fill_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_float_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_gather_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_grid_sampler_2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_half_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_hsplit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_index_put_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_index_reduce_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_kthvalue_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_linalg_vector_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_masked_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_masked_logaddexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_max_pool2d_with_indices_backward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_max_reduction_no_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_max_reduction_with_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_min_reduction_no_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nansum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_narrow_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_avg_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_batch_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_bilinear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_conv2d_stride_groups_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_conv2d_stride_padding_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_elu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_grid_sample_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_hardshrink_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_interpolate_linear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_interpolate_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_l1_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_leaky_relu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_linear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_logsigmoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_max_unpool2d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_mse_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_pad_circular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_pad_constant_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_pad_replicate_negative_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_rrelu_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_selu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_ones_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_ormqr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_polygamma_polygamma_n_0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_quantile_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_rand_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_randint_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_reshape_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_rsqrt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_scatter_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_scatter_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_slice_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_special_chebyshev_polynomial_w_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_special_modified_bessel_i1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_special_shifted_chebyshev_polynomial_w_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_square_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_stft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_sum_to_size_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmap_tril_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmapvmap_CubeGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vjpvmapvmap_SelectGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_ForwardHasDefaultArgsAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_MulGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_NumpyCubeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_NumpyCubeNotComposableAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_NumpyExpMarkDirtyAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_NumpyMulAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_ScaleGradGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_SortGenVmapAutogradFunction_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_ZeroGradientsGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad___getitem___cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad___getitem___functorch_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad__chunk_cat_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad__native_batch_norm_legit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad__unsafe_masked_index_put_accumulate_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad__unsafe_masked_index_put_accumulate_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_abs_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_acosh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_acosh_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_addcmul_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_addmv_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_angle_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_argmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_argwhere_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_as_strided_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_atanh_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_byte_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_cartesian_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_cdouble_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_char_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_clamp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_column_stack_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_cosh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_cumulative_trapezoid_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_dsplit_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_empty_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_empty_like_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_exp_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_exponential_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_fftn_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_hfft_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_hfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_irfftn_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fft_rfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_flatten_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_float_power_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_fmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_full_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_geometric_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_geqrf_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_igamma_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_inner_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_int_functorch_no_channels_last_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_isnan_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_jiterator_4inputs_with_extra_args_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_jiterator_binary_return_by_ref_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_cholesky_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_det_singular_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_diagonal_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_eig_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_matrix_rank_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_matrix_rank_hermitian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_linalg_norm_subgradients_at_zero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_log_softmax_with_dtype_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_logical_not_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_logical_xor_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_lu_unpack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_mH_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_fill_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_median_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_masked_var_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_movedim_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nan_to_num_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_native_batch_norm_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_conv2d_stride_depthwise_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_dropout_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_elu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_embedding_bag_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_fractional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_glu_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_hardtanh_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_instance_norm_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_interpolate_nearest-exact_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_interpolate_nearest-exact_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_l1_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_max_unpool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_normalize_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_pad_circular_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_prelu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_rms_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_rms_norm_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_scaled_dot_product_attention_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_smooth_l1_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_softmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_tanhshrink_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_triplet_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_nn_functional_upsample_nearest_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_norm_inf_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_ones_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_ops_aten__new_zeros_with_same_feature_meta_functorchonly_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_outer_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_pinverse_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_polygamma_polygamma_n_1_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_polygamma_polygamma_n_4_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_put_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_quantile_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_ravel_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_real_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_renorm_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_resolve_neg_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_roll_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_rot90_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_round_decimals_0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_round_decimals_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_round_decimals_3_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_select_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_signal_windows_bartlett_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_signal_windows_blackman_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_signal_windows_kaiser_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_signbit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_sinh_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_softmax_with_dtype_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_bessel_y0_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_chebyshev_polynomial_v_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_log_ndtr_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_modified_bessel_i1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_polygamma_special_polygamma_n_0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_shifted_chebyshev_polynomial_w_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_spherical_bessel_j0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_special_xlog1py_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_sqrt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_std_unbiased_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_svd_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_take_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_tan_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_tan_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_tensor_split_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_topk_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_trapezoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_trapezoid_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_true_divide_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_true_divide_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_unique_consecutive_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_var_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_var_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_var_mean_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_vdot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_view_as_cuda_float64, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmap_autograd_grad_where_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_NumpyMulAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_ScaleGradGenVmapAutogradFunction_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_SelectGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall___getitem___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall___rpow___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall__chunk_cat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_acosh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_addmm_decomposed_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_addmv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_alias_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_argmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_as_strided_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_asin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_broadcast_shapes_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_broadcast_to_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_byte_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_cauchy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_diagonal_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_diff_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_div_no_rounding_mode_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_dot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_erfc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_expand_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_expand_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_flipud_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_float_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_frac_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_NumpySortAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_NumpyTakeAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule___rsub___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_addcmul_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_broadcast_to_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_chalf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_clone_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_cumsum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_div_floor_rounding_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_empty_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_erf_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_erfinv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_eye_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_fft_fftshift_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_fliplr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_float_power_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_floor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_frac_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_ge_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_half_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_hstack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_index_reduce_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_index_reduce_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_index_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_isin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_isreal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_jiterator_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_lerp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_linalg_det_singular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_log10_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_logcumsumexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_lu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_masked_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_masked_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_masked_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_mvlgamma_mvlgamma_p_1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_mvlgamma_mvlgamma_p_5_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_new_empty_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_new_full_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nextafter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_alpha_dropout_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_avg_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_batch_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_embedding_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_hardtanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_max_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_max_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_max_unpool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_pad_constant_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_pad_replicate_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_softmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_softshrink_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_nn_functional_threshold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_norm_inf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_normal_in_place_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_ops_aten_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_polygamma_polygamma_n_2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_pow_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_rand_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_randint_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_reshape_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_resize__cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_scalar_tensor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_scatter_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_scatter_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_signal_windows_cosine_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_signal_windows_general_hamming_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_sinc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_sort_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_special_shifted_chebyshev_polynomial_v_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_split_list_args_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_squeeze_multiple_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_to_sparse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_unbind_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_uniform_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_unique_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_has_batch_rule_unsafe_split_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_isreal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_jiterator_unary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_lerp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_cond_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_det_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_inv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_ldl_factor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_lstsq_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_matrix_power_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_linalg_qr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_log1p_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_logsumexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_masked_softmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_masked_std_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_masked_var_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_min_reduction_no_dim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_minimum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nanmedian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_narrow_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_native_batch_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nextafter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_avg_pool2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_conv2d_stride_padding_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_conv2d_strided_padding_dilation_with_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_elu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_hardsigmoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_instance_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_kl_div_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_max_unpool3d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_mish_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_mse_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_mse_loss_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_pad_reflect_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_rms_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_scaled_dot_product_attention_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_softmin_with_dtype_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_softsign_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_nn_functional_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_ops_aten__new_zeros_with_same_feature_meta_functorchonly_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_put_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_rad2deg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_real_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_resize_as__cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_roll_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_short_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_slice_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_special_bessel_y0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_split_list_args_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_squeeze_multiple_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_tanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_unique_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_unsafe_chunk_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_unsqueeze_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_var_mean_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpall_view_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_H_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp___getitem___functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp___radd___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp___rmul___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_acosh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_atan2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_baddbmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_byte_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_cholesky_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_chunk_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_clamp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_cosh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_cov_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_div_no_rounding_mode_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_div_trunc_rounding_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_empty_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_equal_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_exp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_fft_hfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_fft_ifftshift_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_fft_ihfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_fft_rfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_fft_rfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_float_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_fmin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_full_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_gradient_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_half_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_half_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_igamma_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_index_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_isreal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_cholesky_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_lstsq_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_matrix_rank_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_pinv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_linalg_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_logaddexp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_logdet_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_lt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_mH_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_masked_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_masked_fill_functorch_Scalar_only_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_masked_sum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_median_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_mv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_adaptive_max_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_conv2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_conv2d_stride_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_conv2d_stride_padding_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_conv3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_cosine_embedding_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_dropout3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_fractional_max_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_pad_constant_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_nn_functional_pad_replicate_negative_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_norm_inf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_ones_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_positive_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_rand_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_randint_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_reciprocal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_reshape_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_resize_as__cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_scalar_tensor_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_searchsorted_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_short_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_signal_windows_blackman_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_signal_windows_hamming_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_softmax_with_dtype_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_special_ndtri_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_special_zeta_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_sqrt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_squeeze_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_sub_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_t_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_torch_ops_aten__efficient_attention_forward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvjp_view_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvmap_NumpyCubeNotComposableAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvmap_NumpyMulAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapjvpvmap_SortGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_H_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_MulGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_NumpySortAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_bool_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_broadcast_shapes_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_cdouble_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_ceil_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_chalf_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_char_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_clamp_max_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_cos_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_count_nonzero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_cummin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_dsplit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_erfinv_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_fft_fftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_fft_ifft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_fft_irfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_MulGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule__softmax_backward_data_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule__upsample_bilinear2d_aa_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_aminmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_angle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_asinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_atan2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_atanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_baddbmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_bool_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_cartesian_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_cholesky_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_diag_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_diagonal_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_div_floor_rounding_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_expand_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_fft_irfft_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_flip_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_gt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_half_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_i0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_index_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_jiterator_unary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_matrix_power_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_norm_subgradients_at_zero_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_linalg_qr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_log1p_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_logspace_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_lu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_masked_amin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_masked_median_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_mode_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_movedim_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_mvlgamma_mvlgamma_p_1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_native_dropout_backward_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_adaptive_avg_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_adaptive_max_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_adaptive_max_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_alpha_dropout_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_bilinear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_conv2d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_conv2d_stride_padding_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_gelu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_interpolate_linear_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_layer_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_logsigmoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_max_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_mse_loss_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_multilabel_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_pixel_unshuffle_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_rrelu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nn_functional_softplus_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_nonzero_static_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_norm_fro_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_ones_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_polygamma_polygamma_n_4_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_qr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_randn_like_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_resize__cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_rot90_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_round_decimals_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_rsqrt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_rsub_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_sinh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_special_modified_bessel_i1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_special_modified_bessel_k0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_special_scaled_modified_bessel_k0_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_split_with_sizes_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_tile_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_triangular_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_unsqueeze_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_view_as_complex_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_has_batch_rule_zeros_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_index_reduce_prod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_int_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_isin_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_jiterator_2inputs_2outputs_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_jiterator_binary_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_linalg_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_linalg_solve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_linalg_solve_triangular_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_logspace_tensor_overload_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_lu_unpack_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_masked_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_masked_cumprod_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_masked_log_softmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_masked_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_masked_select_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_masked_std_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_masked_sum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_matrix_exp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_mm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nan_to_num_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nanmean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nanmedian_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_narrow_copy_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_neg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_new_empty_strided_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_new_ones_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_adaptive_max_pool3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_batch_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_conv1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_hardsigmoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_hardswish_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_instance_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_kl_div_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_margin_ranking_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_pad_constant_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_pdist_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_relu6_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_scaled_dot_product_attention_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_selu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_softsign_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_nn_functional_unfold_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_norm_fro_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_ormqr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_polygamma_polygamma_n_1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_randn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_ravel_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_renorm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_repeat_interleave_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_reshape_as_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_round_decimals_3_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_scatter_reduce_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_signal_windows_bartlett_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_signbit_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_slice_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_slice_scatter_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_special_entr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_special_zeta_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_std_mean_unbiased_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_sum_to_size_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_tanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_to_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_transpose_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_unsafe_split_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_var_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjp_zeros_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_NumpyCubeNotComposableAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_NumpySortAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_ScaleGradGenVmapAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp___radd___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp___rdiv___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp___rmod___cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp__segment_reduce_offsets_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_all_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_amax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_argmax_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_atleast_3d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_char_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_clamp_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_combinations_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_digamma_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_dot_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_erfc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_fft_hfft2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_fft_irfftn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_fliplr_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_float_functorch_no_channels_last_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_ge_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_gt_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_index_add_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_norm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_linalg_tensorsolve_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_log2_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_masked_cumsum_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_mvlgamma_mvlgamma_p_5_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_avg_pool1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_conv1d_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_conv2d_strided_padding_dilation_no_bias_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_feature_alpha_dropout_without_train_cuda_float32, 
test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_hardsigmoid_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_interpolate_area_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_leaky_relu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_max_unpool3d_grad_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_mse_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_soft_margin_loss_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_nn_functional_upsample_nearest_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_norm_nuc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_normal_number_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_ops_aten_index_put_functorch_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_outer_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_pinverse_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_rand_like_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_randn_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_ravel_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_repeat_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_resolve_neg_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_signal_windows_general_cosine_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_softmax_with_dtype_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_sparse_mm_reduce_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_sparse_sampled_addmm_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_special_bessel_y1_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_special_i0e_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_split_with_sizes_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_square_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_tanh_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_triu_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_trunc_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_unbind_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_unfold_copy_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvjp_var_mean_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvmap_NumpyCubeNotComposableAutogradFunction_cuda_float32, test/functorch/test_ops.py::TestOperatorsCUDA::test_vmapvjpvmap_NumpyMulAutogradFunction_cuda_float32 2024-08-07T18:18:05.5213311Z 2024-08-07T18:18:09.3935981Z Running test_ops 7/11 ... [2024-08-07 18:18:09.393069] 2024-08-07T18:18:09.3940321Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_ops.py', '-m', 'not serial', '--shard-id=7', '--num-shards=11', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 18:18:09.393603] 2024-08-07T18:23:59.9017090Z 2024-08-07T18:23:59.9020608Z test_ops 2/11 was successful, full logs can be found in artifacts with path test/test-reports/test_ops_2.11_88df29a74f745b59_.log 2024-08-07T18:24:00.0218056Z Running 3024 items in this shard: test/test_ops.py::TestCommonCUDA::test_compare_cpu_H_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu___rmod___cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_cauchy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_diagonal_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_expand_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_eye_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_hstack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_igamma_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_igammac_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_new_full_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_nextafter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_nn_functional_nll_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_repeat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_reshape_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_rot90_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_unsqueeze_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_var_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_view_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_xlogy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_atan2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_bernoulli_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_combinations_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_cummax_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_dist_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_div_floor_rounding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_dsplit_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_empty_strided_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_full_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_geometric_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_linalg_eig_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_linalg_householder_product_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_logdet_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_masked_cumprod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_masked_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_max_pool2d_with_indices_backward_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_max_reduction_no_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_mm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_narrow_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_native_batch_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_new_full_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nextafter_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_conv2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_interpolate_bicubic_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_max_pool3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_max_unpool2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_multi_head_attention_forward_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_pad_constant_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_pad_reflect_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_softmin_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nonzero_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_norm_fro_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_norm_nuc_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_normal_number_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_quantile_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_resolve_neg_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_scatter_reduce_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_select_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_to_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_uniform_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_unsqueeze_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_var_unbiased_cuda_float32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_abs_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_as_strided_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_asin_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_char_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_dstack_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_hstack_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_index_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_log_softmax_with_dtype_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_masked_fill_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_nn_functional_conv2d_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_nn_functional_conv_transpose3d_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_prod_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_rsqrt_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_sub_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_unsafe_chunk_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_view_as_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_view_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_dtypes_T_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes___getitem___cuda, 
test/test_ops.py::TestCommonCUDA::test_dtypes___rmul___cuda, test/test_ops.py::TestCommonCUDA::test_dtypes___ror___cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__batch_norm_with_update_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs__conversions_char_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_alias_copy_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_allclose_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_arange_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_as_strided_copy_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_atan2_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_bitwise_or_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_div_trunc_rounding_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_dot_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_equal_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_erf_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_expand_copy_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_fft_rfft2_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_isfinite_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_lgamma_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_log1p_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_log2_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_log_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_log_normal_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_meshgrid_variadic_tensors_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_nn_functional_hinge_embedding_loss_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_nn_functional_relu_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_nn_functional_softshrink_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_nn_functional_tanhshrink_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_nn_functional_triplet_margin_loss_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_remainder_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_sigmoid_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_special_i0e_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_special_log_softmax_with_dtype_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_special_multigammaln_mvlgamma_p_1_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_square_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_sub_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_triu_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_trunc_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_xlogy_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_addcmul_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_addmm_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_amax_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_amin_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_asin_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_bincount_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_cat_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_cdist_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_char_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_cholesky_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_count_nonzero_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_erfc_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_erfinv_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_fft_rfft2_cuda, 
test/test_ops.py::TestCommonCUDA::test_dtypes_flip_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_flipud_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_half_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_heaviside_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_hstack_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_index_reduce_amax_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_inner_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_isneginf_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_jiterator_binary_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_lcm_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_linalg_ldl_factor_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_linalg_norm_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_logical_not_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_masked_amin_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_masked_var_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_median_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_minimum_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_mm_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_mv_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_native_dropout_backward_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_neg_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_alpha_dropout_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_instance_norm_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_max_unpool2d_grad_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_mish_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_normalize_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_rrelu_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_norm_inf_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_ormqr_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_permute_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_resolve_neg_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_rot90_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_round_decimals_3_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_scatter_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_scatter_reduce_mean_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_select_scatter_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_softmax_with_dtype_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_special_laguerre_polynomial_l_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_special_modified_bessel_i1_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_special_ndtr_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_special_scaled_modified_bessel_k1_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_square_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_unfold_copy_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_uniform_cuda, test/test_ops.py::TestCommonCUDA::test_errors___rand___cuda, test/test_ops.py::TestCommonCUDA::test_errors___rdiv___cuda, test/test_ops.py::TestCommonCUDA::test_errors_aminmax_cuda, test/test_ops.py::TestCommonCUDA::test_errors_as_strided_scatter_cuda, test/test_ops.py::TestCommonCUDA::test_errors_complex_cuda, test/test_ops.py::TestCommonCUDA::test_errors_fft_fftn_cuda, test/test_ops.py::TestCommonCUDA::test_errors_fft_irfft2_cuda, test/test_ops.py::TestCommonCUDA::test_errors_fft_rfftn_cuda, test/test_ops.py::TestCommonCUDA::test_errors_fmax_cuda, test/test_ops.py::TestCommonCUDA::test_errors_gradient_cuda, 
test/test_ops.py::TestCommonCUDA::test_errors_kthvalue_cuda, test/test_ops.py::TestCommonCUDA::test_errors_median_cuda, test/test_ops.py::TestCommonCUDA::test_errors_nn_functional_l1_loss_cuda, test/test_ops.py::TestCommonCUDA::test_errors_nn_functional_multi_margin_loss_cuda, test/test_ops.py::TestCommonCUDA::test_errors_scatter_cuda, test/test_ops.py::TestCommonCUDA::test_errors_special_chebyshev_polynomial_t_cuda, test/test_ops.py::TestCommonCUDA::test_errors_special_shifted_chebyshev_polynomial_u_cuda, test/test_ops.py::TestCommonCUDA::test_errors_triu_cuda, test/test_ops.py::TestCommonCUDA::test_multiple_devices___rdiv___cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices__chunk_cat_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_allclose_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_amax_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_argwhere_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_as_strided_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_as_strided_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_block_diag_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_bmm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_cdist_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_ceil_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_cholesky_solve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_combinations_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_corrcoef_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_cummin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_cumulative_trapezoid_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_diagonal_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_erfc_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fft_fftshift_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fft_ifft2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fft_ihfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fft_rfftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fmax_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_frexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_gather_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_igammac_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_index_reduce_mean_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_jiterator_binary_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_jiterator_unary_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_linalg_lu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_linalg_pinv_singular_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_linalg_qr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_linalg_vander_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_linalg_vecdot_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_long_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_masked_amin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_masked_std_cuda_int64, 
test/test_ops.py::TestCommonCUDA::test_multiple_devices_max_reduction_no_dim_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_meshgrid_list_of_tensors_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_min_reduction_with_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_mode_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_mul_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_narrow_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nextafter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_adaptive_max_pool1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_adaptive_max_pool3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_alpha_dropout_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_celu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_conv_transpose3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_hinge_embedding_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_margin_ranking_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_max_unpool2d_grad_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_max_unpool3d_grad_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_normalize_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_pad_circular_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_pad_replicate_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_prelu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_relu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_triplet_margin_loss_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_upsample_nearest_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_polygamma_polygamma_n_1_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_rad2deg_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_ravel_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_resize_as__cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_resolve_neg_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_scatter_reduce_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_scatter_reduce_prod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_scatter_reduce_prod_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_signal_windows_gaussian_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_sin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_slice_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_chebyshev_polynomial_u_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_i0e_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_modified_bessel_k0_cuda_int64, 
test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_polygamma_special_polygamma_n_0_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_sum_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_t_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_tanh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_tile_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_to_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_to_sparse_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_torch_ops_aten__efficient_attention_forward_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_unique_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_unsafe_split_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_view_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_vsplit_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_vstack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_zeros_like_cuda_int64, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values___radd___cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_addr_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_all_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_angle_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_diag_embed_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_digamma_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_fft_fft_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_fft_ifftshift_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_fft_irfftn_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_fmax_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_isclose_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_isfinite_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_jiterator_2inputs_2outputs_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_masked_fill_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_max_binary_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_nan_to_num_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_new_empty_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_permute_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_polygamma_polygamma_n_3_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_sigmoid_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_softmax_with_dtype_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_special_chebyshev_polynomial_t_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_special_hermite_polynomial_h_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_special_legendre_polynomial_p_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_squeeze_multiple_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_sum_to_size_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_t_copy_cuda_bool, 
test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_to_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_tril_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_true_divide_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_unique_consecutive_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_view_as_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_zeros_like_cuda_bool, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_T_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples__unsafe_masked_index_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples__unsafe_masked_index_put_accumulate_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_add_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_as_strided_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_as_strided_scatter_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_asin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_atan_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_atanh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_atleast_3d_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_bfloat16_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_block_diag_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_bmm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_bool_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_broadcast_tensors_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_chunk_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_chunk_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_clamp_max_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_combinations_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_complex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_constant_pad_nd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cov_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cov_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cross_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cummax_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cumprod_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cumulative_trapezoid_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_deg2rad_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_diag_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_div_floor_rounding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_double_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_dstack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_einsum_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_empty_strided_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_empty_strided_cuda_int64, 
test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_eq_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_exp2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_fftn_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_hfft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_ifftshift_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_ihfft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_ihfftn_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_irfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_rfft_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_rfftn_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_flatten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_flatten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fliplr_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fliplr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_floor_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fmin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_full_like_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_histc_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_index_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_index_select_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_int_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_isfinite_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_jiterator_4inputs_with_extra_args_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_jiterator_binary_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_det_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_ldl_solve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_lu_factor_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_multi_dot_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_pinv_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_tensorinv_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_tensorinv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_tensorsolve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_vander_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linspace_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linspace_tensor_overload_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_log10_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_logaddexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_logdet_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_logical_and_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_logical_not_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_lu_cuda_complex64, 
test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_argmin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_cumprod_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_cumprod_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_cumsum_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_logsumexp_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_mean_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_std_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_var_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_var_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_max_binary_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_max_reduction_with_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_meshgrid_variadic_tensors_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_min_reduction_with_dim_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_msort_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nanmean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nansum_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_neg_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_neg_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_new_empty_strided_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_adaptive_max_pool1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_avg_pool1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_cosine_similarity_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_ctc_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_embedding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_fractional_max_pool3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_grid_sample_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_interpolate_nearest_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_linear_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_max_unpool2d_grad_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_pad_constant_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_pad_replicate_negative_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_pairwise_distance_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_relu_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_threshold_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_triplet_margin_loss_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_triplet_margin_loss_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_triplet_margin_with_distance_loss_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_upsample_nearest_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_ormqr_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_polygamma_polygamma_n_4_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_randn_like_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_ravel_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_remainder_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_repeat_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_roll_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_round_decimals_0_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_round_decimals_neg_3_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_rsub_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_scatter_reduce_amin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_scatter_reduce_mean_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_select_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_signal_windows_cosine_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_signal_windows_general_hamming_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_sparse_sampled_addmm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_airy_ai_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_bessel_y0_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_chebyshev_polynomial_t_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_entr_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_erfcx_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_i0e_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_i1_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_i1e_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_modified_bessel_i1_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_scaled_modified_bessel_k1_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_split_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_split_with_sizes_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_squeeze_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_std_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_std_mean_unbiased_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_std_unbiased_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_svd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_unfold_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_unfold_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_view_cuda_complex64, 
test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_vstack_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_xlogy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_zero__cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_zeros_like_cuda_float32, test/test_ops.py::TestCommonCUDA::test_numpy_ref_argwhere_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_cat_cuda_int64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_clamp_cuda_int64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_diff_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_equal_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_jiterator_2inputs_2outputs_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_numpy_ref_linalg_cross_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_numpy_ref_meshgrid_variadic_tensors_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_meshgrid_variadic_tensors_cuda_int64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_nn_functional_gelu_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_nn_functional_smooth_l1_loss_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_roll_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_roll_cuda_int64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_signal_windows_blackman_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_signal_windows_general_cosine_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_signal_windows_general_hamming_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_squeeze_multiple_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_out_H_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out___getitem___cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__native_batch_norm_legit_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs__conversions_bool_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs__conversions_complex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs__conversions_long_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_asin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_bitwise_left_shift_cuda_int64, test/test_ops.py::TestCommonCUDA::test_out__refs_bucketize_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_cat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_constant_pad_nd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_count_nonzero_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_div_no_rounding_mode_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_dstack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_erf_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_fft_ihfft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_flipud_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_float_power_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_hsplit_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_istft_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out__refs_item_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_linalg_vecdot_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_log2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_logspace_tensor_overload_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_neg_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_out__refs_nn_functional_hardshrink_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_nn_functional_poisson_nll_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_nn_functional_relu6_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_pow_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_repeat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_round_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_select_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_sinh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_special_entr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_special_log_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_special_multigammaln_mvlgamma_p_1_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_special_ndtr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_special_zeta_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_squeeze_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_sub_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_to_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_trace_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_unflatten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_vstack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_add_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_alias_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_all_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_arange_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_argmin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_bernoulli_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_bitwise_right_shift_cuda_int64, test/test_ops.py::TestCommonCUDA::test_out_cauchy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_clamp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_combinations_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_dstack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_fft_fftshift_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_fft_hfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_fft_ifftshift_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_fft_irfftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_flip_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_fmin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_full_like_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_gather_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_grid_sampler_2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_gt_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_half_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_igamma_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_index_reduce_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_isnan_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_isreal_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_jiterator_binary_return_by_ref_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_ldexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_linalg_ldl_solve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_linalg_lu_solve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_linalg_norm_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_out_linalg_norm_subgradients_at_zero_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_lt_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_masked_argmin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_masked_normalize_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_masked_var_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_max_reduction_with_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_min_reduction_no_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_min_reduction_with_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_mm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_msort_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nanmedian_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_new_ones_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_adaptive_avg_pool1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_conv2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_cosine_embedding_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_elu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_layer_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_max_unpool1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_max_unpool3d_grad_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_mse_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_nll_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_silu_complex_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_smooth_l1_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_tanhshrink_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_pca_lowrank_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_put_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_acos_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_acosh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_addcdiv_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_addcmul_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_addmv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_alias_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_atan2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_cat_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_cummin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_erfinv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_fft_hfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_fft_ifft2_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_fft_ihfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_fft_rfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_index_reduce_prod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_cholesky_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_eig_cuda_complex64, 
test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_lu_solve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_matrix_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_pinv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_pinv_hermitian_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_log_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_logspace_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_masked_select_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_mm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_mv_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_normal_number_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_quantile_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_round_decimals_0_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_scatter_reduce_amin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_scatter_reduce_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_scatter_reduce_sum_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_sgn_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_special_ndtr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_special_ndtri_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_special_xlog1py_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_square_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_sub_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_sub_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_take_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_unsqueeze_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_where_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_roll_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_searchsorted_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_sign_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_signal_windows_general_cosine_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_split_with_sizes_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_squeeze_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_svd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_svd_lowrank_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_tanh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_vdot_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_warning___rxor___cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_broadcast_to_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_diag_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_dot_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_expm1_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fft_fftshift_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fft_ifftshift_cuda, 
test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fill_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fliplr_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fmax_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_linalg_svd_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_linspace_tensor_overload_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_log_softmax_with_dtype_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_logspace_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_narrow_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_nn_functional_dropout_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_nn_functional_mish_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_pow_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_randn_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_reshape_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_rot90_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_special_entr_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_special_log_ndtr_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_special_logit_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_t_copy_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_to_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_trunc_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_var_mean_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_vstack_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_atleast_1d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_bitwise_and_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_block_diag_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_bool_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_byte_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_cauchy_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_char_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_cholesky_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_dist_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_empty_strided_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_expand_as_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_expand_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_fft_fftn_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_fft_hfft_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_fft_hfftn_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_fft_ifftshift_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_fft_rfft_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_geometric_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_gradient_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_histc_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_igamma_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_index_reduce_mean_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_isreal_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_istft_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_lgamma_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_linalg_ldl_factor_ex_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_log_softmax_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_logaddexp2_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_logical_not_cuda, 
test/test_ops.py::TestCommonCUDA::test_out_warning_lu_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_masked_logaddexp_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_masked_softmax_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_masked_softmin_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_max_binary_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_meshgrid_variadic_tensors_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_min_binary_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nansum_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_narrow_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_adaptive_max_pool1d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_celu_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_dropout3d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_embedding_bag_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_embedding_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_max_pool1d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_max_pool2d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_max_pool3d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_max_unpool3d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_scaled_dot_product_attention_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_softplus_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_norm_fro_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_normal_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_prod_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_reshape_as_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_resolve_conj_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_select_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_signal_windows_hamming_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_signal_windows_hann_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_signbit_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_sinh_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_slice_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_sparse_sampled_addmm_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_special_modified_bessel_i0_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_sqrt_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_to_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_torch__scaled_mm_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_tril_indices_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_triu_indices_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_view_cuda, test/test_ops.py::TestCommonCUDA::test_out_zero__cuda_float32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_asin_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_asinh_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_atanh_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_cos_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_div_no_rounding_mode_cuda_int16, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_div_no_rounding_mode_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_erfinv_cuda_int32, 
test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_erfinv_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_erfinv_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_expm1_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_i0_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_i0_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_i0_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_ldexp_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_lgamma_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_log1p_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_log1p_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_logit_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_masked_mean_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_masked_var_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_mvlgamma_mvlgamma_p_5_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_polygamma_polygamma_n_2_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_polygamma_polygamma_n_3_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_rad2deg_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_rad2deg_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_chebyshev_polynomial_t_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_chebyshev_polynomial_v_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_hermite_polynomial_he_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_legendre_polynomial_p_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_shifted_chebyshev_polynomial_v_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_shifted_chebyshev_polynomial_w_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_sqrt_cuda_int16, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_tan_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_tanh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_tanh_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_xlogy_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_T_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_T_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_bfloat16_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_bfloat16_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_bool_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_byte_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_byte_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_byte_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_byte_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_cdouble_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_cfloat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_double_cuda_complex32, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_float_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_float_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_half_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_half_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_int_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_long_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_short_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_short_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_abs_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_abs_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_abs_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_acos_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_acos_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_acos_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_add_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addcdiv_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addcdiv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addcmul_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addr_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addr_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addr_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_alias_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_alias_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_all_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_all_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_allclose_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_amax_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_amax_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_amax_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_amin_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_arange_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_arange_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_partial_views_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_partial_views_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_partial_views_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_scatter_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_asinh_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_asinh_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atan2_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atan_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atanh_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atanh_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atanh_cuda_int32, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atleast_1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atleast_2d_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atleast_2d_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atleast_3d_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atleast_3d_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bitwise_and_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bitwise_not_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bitwise_or_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bitwise_right_shift_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_broadcast_tensors_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bucketize_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cat_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ceil_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_chunk_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_clamp_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_clamp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_clamp_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_clamp_min_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_clone_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_column_stack_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_conj_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_constant_pad_nd_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_constant_pad_nd_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_contiguous_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_copysign_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cos_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_count_nonzero_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cumprod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cumsum_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_deg2rad_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diag_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diag_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diagonal_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diagonal_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_digamma_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_div_no_rounding_mode_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dot_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dsplit_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dstack_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dstack_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dstack_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_empty_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_empty_like_cuda_complex128, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_empty_strided_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_eq_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_equal_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_erf_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_erfinv_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_erfinv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_exp2_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_exp_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_exp_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_exponential_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_eye_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_fft_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_fftshift_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_hfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_hfft_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_hfft_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_hfft_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifft2_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifftn_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifftn_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifftshift_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifftshift_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ihfft2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ihfft2_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ihfft_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_irfftn_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_rfft2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_rfft_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_rfftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fill_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fliplr_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fliplr_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_flipud_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_float_power_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_float_power_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_floor_divide_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fmax_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fmin_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_gcd_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ge_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ge_cuda_float16, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_geometric_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_heaviside_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_hstack_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_hypot_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_i0_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_igamma_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_copy_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_copy_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_fill_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isinf_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isinf_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isinf_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isinf_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isneginf_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isneginf_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isposinf_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isreal_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isreal_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isreal_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_lcm_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_le_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_le_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_lerp_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_cross_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_diagonal_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_diagonal_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_matrix_norm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linspace_tensor_overload_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linspace_tensor_overload_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log10_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log1p_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log2_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_normal_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_normal_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logical_and_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logical_or_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logical_xor_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logical_xor_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logspace_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logsumexp_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_masked_fill_cuda_complex128, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_masked_fill_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_masked_fill_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_maximum_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_maximum_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_meshgrid_variadic_tensors_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_meshgrid_variadic_tensors_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_minimum_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_minimum_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_movedim_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_mul_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_narrow_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_narrow_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ne_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_neg_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_empty_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_full_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_ones_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_ones_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_zeros_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_zeros_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_alpha_dropout_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_elu_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_hardtanh_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_layer_norm_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_margin_ranking_loss_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_mish_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_pixel_shuffle_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_pixel_shuffle_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_pixel_unshuffle_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_poisson_nll_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_poisson_nll_loss_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_relu_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_selu_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_softmin_with_dtype_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_softplus_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_softshrink_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_tanhshrink_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_normal_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ones_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_permute_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_permute_cuda_uint8, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_pow_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_rad2deg_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_randn_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ravel_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ravel_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_real_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_reciprocal_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_remainder_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_repeat_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_repeat_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_reshape_as_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_roll_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_round_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_round_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_rsqrt_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_rsqrt_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_rsub_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sgn_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sgn_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sigmoid_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sigmoid_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sigmoid_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sign_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_signbit_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_signbit_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sin_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sinh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_softmax_with_dtype_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_softmax_with_dtype_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_erfcx_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_erfcx_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_i1e_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_log_ndtr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_log_ndtr_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_log_softmax_with_dtype_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_multigammaln_mvlgamma_p_1_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_ndtr_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_ndtr_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_softmax_with_dtype_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_softmax_with_dtype_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_softmax_with_dtype_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_xlog1py_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_xlog1py_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_xlog1py_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sqrt_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_square_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_squeeze_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sub_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sum_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sum_to_size_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sum_to_size_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sum_to_size_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_t_copy_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_t_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_take_along_dim_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_take_along_dim_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_take_along_dim_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_tensor_split_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_trace_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_true_divide_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_true_divide_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_trunc_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_trunc_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unflatten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unflatten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unflatten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unfold_copy_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unfold_copy_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unsqueeze_copy_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unsqueeze_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unsqueeze_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unsqueeze_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_var_mean_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_as_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_as_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_copy_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_vsplit_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_vsplit_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_vstack_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_where_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_xlogy_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_xlogy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_xlogy_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_zeros_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_zeros_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_T_cuda, 
test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_add_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_diagonal_copy_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_diagonal_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_div_trunc_rounding_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_fft_ifftn_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_fft_ihfftn_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_fft_irfft_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_ge_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_igammac_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_index_add_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_logaddexp_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_normal__in_place_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_bfloat16_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_bfloat16_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_bfloat16_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_bool_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_byte_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cdouble_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cdouble_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cfloat_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cfloat_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_chalf_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_double_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_double_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_float_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_float_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_int_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_int_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_long_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_acos_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_acosh_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_addcmul_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_addcmul_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_addr_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_all_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_amax_executor_aten_cuda_bool, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_amax_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_amin_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_copy_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_copy_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_partial_views_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_partial_views_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_partial_views_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_partial_views_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_scatter_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_scatter_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_scatter_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_asin_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_asinh_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atan2_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atan2_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atan_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atanh_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atanh_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atanh_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atleast_1d_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atleast_1d_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atleast_2d_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atleast_3d_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_bitwise_not_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_bitwise_xor_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_shapes_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_tensors_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_tensors_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_tensors_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_to_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_bucketize_executor_aten_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_bucketize_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cat_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cauchy_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ceil_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_chunk_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_clone_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_column_stack_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_column_stack_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_constant_pad_nd_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_constant_pad_nd_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_copysign_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cos_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cos_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cosh_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_count_nonzero_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cumprod_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cumsum_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cumsum_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diag_embed_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diag_embed_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diag_embed_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_copy_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_scatter_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_div_no_rounding_mode_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_div_trunc_rounding_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_dot_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_dsplit_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_dsplit_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_dstack_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_dstack_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_empty_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_empty_like_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_empty_strided_executor_aten_cuda_complex128, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_empty_strided_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_eq_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_eq_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_equal_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_erf_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_exp2_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_exp2_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_exp_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_expand_copy_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_eye_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fft2_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fft2_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fft_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fft_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fft_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fftn_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ifft2_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ifft_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ifft_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ifftn_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ifftn_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ihfft2_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ihfft_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ihfftn_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_irfft2_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_irfft2_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_irfft_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_irfft_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_irfftn_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_irfftn_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_rfftn_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_flatten_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_flatten_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_flip_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fliplr_executor_aten_cuda_float16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_float_power_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_float_power_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_float_power_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_floor_divide_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fmin_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fmod_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_gcd_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ge_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_geometric_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_gt_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_hstack_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_hstack_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_index_add_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_index_add_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_index_select_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isclose_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isnan_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isnan_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_item_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_lcm_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_lcm_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_le_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_le_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_linalg_cross_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_linalg_diagonal_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_linalg_diagonal_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_linalg_norm_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_linalg_vector_norm_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_linspace_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log10_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log_softmax_with_dtype_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log_softmax_with_dtype_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logical_and_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logical_and_executor_aten_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logical_not_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logical_xor_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_tensor_overload_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logsumexp_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logsumexp_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_masked_fill_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_masked_fill_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_mean_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_mean_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_meshgrid_list_of_tensors_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_meshgrid_variadic_tensors_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_meshgrid_variadic_tensors_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_meshgrid_variadic_tensors_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_minimum_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_minimum_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_movedim_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_mul_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_mul_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_narrow_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_narrow_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_narrow_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_narrow_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ne_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ne_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_new_empty_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_new_ones_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_new_ones_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_celu_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_channel_shuffle_executor_aten_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_channel_shuffle_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_gelu_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_glu_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_group_norm_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_mse_loss_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_pairwise_distance_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_pdist_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_pixel_shuffle_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_pixel_unshuffle_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_poisson_nll_loss_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_relu_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_softmax_with_dtype_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_softmin_with_dtype_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_softplus_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_tanhshrink_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_tanhshrink_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_threshold_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_norm_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_normal_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_normal_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_permute_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_positive_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_positive_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_positive_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_rad2deg_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_rad2deg_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_randn_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_randn_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_randn_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_real_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_real_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_reciprocal_executor_aten_cuda_uint8, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_remainder_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_reshape_as_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_reshape_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_rot90_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_round_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_rsqrt_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_rsub_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_rsub_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_select_scatter_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sgn_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sigmoid_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sign_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_signbit_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sin_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sinc_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_softmax_with_dtype_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_softmax_with_dtype_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_bessel_j1_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_bessel_j1_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_bessel_j1_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_entr_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_erfcx_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_i0e_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_i0e_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_i1e_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_log_softmax_with_dtype_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_log_softmax_with_dtype_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_log_softmax_with_dtype_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_log_softmax_with_dtype_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_multigammaln_mvlgamma_p_1_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_multigammaln_mvlgamma_p_1_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_multigammaln_mvlgamma_p_3_executor_aten_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_multigammaln_mvlgamma_p_3_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_multigammaln_mvlgamma_p_5_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_ndtri_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_softmax_with_dtype_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_softmax_with_dtype_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sqrt_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sqrt_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_squeeze_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_stack_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_stack_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_std_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_std_mean_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sub_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sub_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sum_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sum_to_size_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_t_copy_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_t_copy_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_t_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_take_along_dim_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_take_along_dim_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tanh_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tensor_split_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_trace_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_trace_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_transpose_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_transpose_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tril_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tril_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_true_divide_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_true_divide_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_trunc_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unbind_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unbind_executor_aten_cuda_complex128, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unbind_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unflatten_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unfold_copy_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unfold_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unsqueeze_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unsqueeze_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_view_as_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_view_copy_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_view_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_vstack_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_where_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_where_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_where_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_xlogy_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_T_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_T_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_bool_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_byte_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_byte_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_cdouble_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_cdouble_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_cfloat_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_cfloat_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_double_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_float_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_long_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_short_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_short_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_short_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_abs_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_abs_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_abs_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_abs_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_acos_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_acos_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_add_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_add_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addcdiv_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_alias_copy_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_alias_copy_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_all_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_all_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_amax_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_amax_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_any_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_arange_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_arange_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_copy_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_partial_views_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_partial_views_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_asin_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atan2_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atan2_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atan2_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atan_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atan_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atan_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atanh_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atleast_2d_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atleast_2d_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atleast_2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atleast_2d_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atleast_3d_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atleast_3d_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_bitwise_and_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_bitwise_or_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_bitwise_right_shift_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_block_diag_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_block_diag_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_broadcast_tensors_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_broadcast_tensors_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_broadcast_to_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_bucketize_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cat_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cat_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cat_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cauchy_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_chunk_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_chunk_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clamp_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clamp_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clamp_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clamp_max_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clone_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_column_stack_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_conj_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_conj_physical_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_constant_pad_nd_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_constant_pad_nd_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_contiguous_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cos_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cos_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cos_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cosh_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_count_nonzero_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diag_embed_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diagonal_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diagonal_scatter_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_digamma_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_div_floor_rounding_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_div_floor_rounding_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_div_trunc_rounding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_div_trunc_rounding_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_div_trunc_rounding_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_dsplit_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_dstack_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_like_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_like_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_strided_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_eq_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_eq_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_equal_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_erfc_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_exp2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expand_as_cuda_float16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expand_as_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expand_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expand_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expand_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expm1_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expm1_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expm1_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_exponential_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_exponential_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_fft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_fft_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_fftn_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_fftn_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_fftshift_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_hfft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_hfft_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_hfftn_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_hfftn_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifft2_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifft_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifftshift_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ihfft2_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ihfft2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ihfft_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ihfft_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ihfft_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ihfftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_irfft2_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_irfft_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_rfft2_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_rfft_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_rfft_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fill_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fill_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flatten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flip_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flip_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flip_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fliplr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flipud_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flipud_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flipud_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_float_power_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_floor_divide_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fmin_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fmod_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_ge_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_hsplit_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_hsplit_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_hypot_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_igammac_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isclose_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isclose_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isclose_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isfinite_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isfinite_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isneginf_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isneginf_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_le_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_matrix_norm_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_svd_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_vecdot_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log10_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log1p_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log_normal_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logaddexp_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logical_and_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logical_not_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logical_or_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logical_xor_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logspace_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logspace_tensor_overload_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logspace_tensor_overload_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_masked_fill_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_masked_fill_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_meshgrid_list_of_tensors_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_minimum_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_movedim_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_mul_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_mul_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nan_to_num_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_narrow_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_narrow_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_ne_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_new_empty_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_new_ones_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_new_ones_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_new_zeros_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_new_zeros_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_new_zeros_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_dropout_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_glu_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_hardshrink_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_hardtanh_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_hinge_embedding_loss_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_l1_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_log_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_log_softmax_with_dtype_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_log_softmax_with_dtype_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_margin_ranking_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_mish_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_relu6_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_relu6_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_relu_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_softmin_with_dtype_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_tanhshrink_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_tanhshrink_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_threshold_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_triplet_margin_loss_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_normal__in_place_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_normal_number_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_permute_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_positive_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_prod_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_randn_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_ravel_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_real_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_reciprocal_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_reshape_as_cuda_complex128, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_roll_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_roll_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rot90_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rot90_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_round_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rsqrt_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rsub_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_select_scatter_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_select_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sgn_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sgn_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sign_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_signbit_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sin_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sinc_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sinh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sinh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sinh_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_softmax_with_dtype_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_entr_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_entr_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_erfcx_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_i0e_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_i1_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_i1e_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_logit_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_multigammaln_mvlgamma_p_1_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_multigammaln_mvlgamma_p_3_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_ndtri_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_ndtri_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_ndtri_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_softmax_with_dtype_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_xlog1py_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_zeta_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sqrt_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sqrt_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_squeeze_multiple_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_squeeze_multiple_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_stack_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_std_mean_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sub_cuda_int8, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sum_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sum_to_size_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_t_copy_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_take_along_dim_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_take_along_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_tanh_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_tanh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_to_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_to_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_trace_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_transpose_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_tril_indices_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_true_divide_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unfold_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unfold_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unfold_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unfold_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unsqueeze_copy_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unsqueeze_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unsqueeze_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unsqueeze_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_var_mean_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_view_as_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_view_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_where_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_zeros_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_bfloat16_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_bfloat16_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_bool_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_byte_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_cfloat_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_cfloat_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_char_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_char_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_complex_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_double_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_double_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_float_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_half_cuda_int16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_half_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_int_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_long_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_long_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_long_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_abs_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_acos_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_acosh_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_acosh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_add_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_addcmul_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_addcmul_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_addr_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_addr_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_alias_copy_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_alias_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_all_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_amax_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_amin_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_amin_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_any_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_arange_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_arange_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_arange_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_as_strided_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_as_strided_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_asinh_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_asinh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atan2_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atan_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atan_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atleast_1d_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atleast_1d_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atleast_3d_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atleast_3d_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bitwise_and_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bitwise_left_shift_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bitwise_not_cuda_bool, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_block_diag_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_broadcast_tensors_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_broadcast_tensors_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cat_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_chunk_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_clamp_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_clamp_max_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_clamp_min_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_clone_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_clone_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_conj_physical_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_constant_pad_nd_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_constant_pad_nd_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_constant_pad_nd_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_contiguous_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_contiguous_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_contiguous_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cumprod_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cumprod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cumprod_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diag_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diag_embed_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diagonal_scatter_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diagonal_scatter_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_div_floor_rounding_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_div_no_rounding_mode_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_div_trunc_rounding_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_dot_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_dstack_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_dstack_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_dstack_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_empty_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_empty_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_empty_like_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_empty_strided_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_equal_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_equal_cuda_int32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_equal_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_erf_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_erfc_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_exp2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_exp2_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_exp_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_exp_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_as_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_as_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_copy_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_fft_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_fftshift_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_hfft2_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_hfftn_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifftn_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifftshift_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifftshift_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ihfft2_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ihfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_irfft2_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_irfft_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_rfft2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_rfft_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_rfft_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fill_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fliplr_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_float_power_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_float_power_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmax_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmin_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmod_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmod_cuda_int32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_frac_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ge_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_geometric_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_geometric_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_gt_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_gt_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_heaviside_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_heaviside_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_heaviside_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_heaviside_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_heaviside_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_heaviside_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_heaviside_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_hsplit_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_hsplit_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_hsplit_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_hsplit_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_hstack_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_hstack_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_hstack_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_i0_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_imag_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_add_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_copy_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_fill_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_select_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isinf_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isinf_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isinf_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isnan_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_item_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_lgamma_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_diagonal_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_diagonal_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_norm_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_svd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linspace_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linspace_tensor_overload_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log10_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log1p_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log1p_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log2_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log_softmax_with_dtype_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log_softmax_with_dtype_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logaddexp2_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logaddexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logical_and_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logical_and_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logical_not_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logspace_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logspace_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logspace_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logsumexp_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_masked_fill_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_maximum_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_meshgrid_variadic_tensors_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_movedim_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_narrow_copy_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_narrow_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ne_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_neg_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_neg_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_strided_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_strided_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_zeros_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_zeros_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_alpha_dropout_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_dropout_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_gelu_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_glu_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_glu_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_hardshrink_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_hardtanh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_hardtanh_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_l1_loss_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_l1_loss_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_l1_loss_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_layer_norm_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_layer_norm_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_layer_norm_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_log_softmax_with_dtype_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_log_softmax_with_dtype_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_margin_ranking_loss_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_margin_ranking_loss_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_mish_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_pixel_shuffle_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_pixel_shuffle_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_pixel_unshuffle_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_pixel_unshuffle_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_poisson_nll_loss_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_relu6_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_smooth_l1_loss_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_softmin_with_dtype_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_threshold_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_threshold_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_normal_number_mean_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_permute_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_positive_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_positive_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_positive_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_rad2deg_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_randn_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ravel_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ravel_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_real_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_reciprocal_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_remainder_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_remainder_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_repeat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_repeat_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_reshape_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_rot90_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_round_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_rsqrt_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_select_scatter_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sgn_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sgn_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sigmoid_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sigmoid_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sigmoid_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_signbit_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_signbit_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sin_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sinc_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_bessel_j0_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_bessel_j1_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_entr_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_erfcx_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_i0e_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_i1e_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_logit_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_multigammaln_mvlgamma_p_1_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_multigammaln_mvlgamma_p_3_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_xlog1py_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_zeta_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_zeta_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_zeta_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sqrt_cuda_complex32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_square_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_square_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_squeeze_multiple_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_std_mean_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sub_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sub_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sub_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sub_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sum_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sum_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_t_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_t_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_t_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_t_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_t_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_take_along_dim_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_take_along_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_take_along_dim_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tan_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tan_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tan_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tanh_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tanh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tanh_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_to_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_trace_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_trace_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_transpose_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tril_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tril_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tril_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_triu_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_trunc_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_trunc_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unflatten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unfold_copy_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unfold_copy_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unfold_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unfold_cuda_int32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unsqueeze_copy_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unsqueeze_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unsqueeze_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_vdot_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_vdot_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_copy_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_vsplit_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_vsplit_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_vsplit_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_vsplit_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_vstack_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_where_cuda_float64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_H_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager___getitem___cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager___radd___cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager___rsub___cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager__batch_norm_with_update_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_addcdiv_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_addcmul_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_block_diag_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_bool_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_broadcast_to_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_cosh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_cross_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_diag_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_diagonal_scatter_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_diff_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_dist_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_double_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_exp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_fft_fft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_fft_fftshift_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_fft_hfftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_fft_irfft2_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_flatten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_float_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_frac_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_hsplit_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_igamma_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_imag_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_index_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_index_put_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_jiterator_unary_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_ldexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_diagonal_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_eig_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_householder_product_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_inv_ex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_ldl_factor_ex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_lu_solve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_matrix_rank_hermitian_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_solve_ex_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_solve_ex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_svd_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_tensorinv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_logaddexp2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_logdet_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_logical_xor_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_logical_xor_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_lu_unpack_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_mT_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_normalize_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_scatter_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_select_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_std_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_std_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_maximum_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_movedim_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_native_batch_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_native_dropout_backward_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_new_empty_strided_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_adaptive_max_pool1d_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_conv1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_conv2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_conv3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_interpolate_trilinear_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_layer_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_multi_head_attention_forward_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_pixel_unshuffle_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_rms_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_scaled_dot_product_attention_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nonzero_static_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_ormqr_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_ormqr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_outer_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_pow_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_qr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_rand_like_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_randn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_resize_as__cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_rot90_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_rot90_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_scatter_reduce_prod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_short_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_sinh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_sparse_sampled_addmm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_special_entr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_split_with_sizes_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_sqrt_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_std_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_std_unbiased_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_tan_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_trapezoid_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_unsafe_chunk_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_var_mean_unbiased_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_view_as_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward__segment_reduce_offsets_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_acosh_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_angle_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_atleast_2d_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_backward_atleast_3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_bernoulli_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_bmm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_broadcast_to_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_digamma_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_dist_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_div_floor_rounding_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_expand_as_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_fft_ifftn_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_index_copy_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_index_reduce_prod_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_linalg_lu_factor_ex_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_linalg_norm_subgradients_at_zero_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_masked_cumsum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_masked_fill_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_masked_mean_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_masked_median_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_masked_select_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_minimum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_native_batch_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_celu_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_conv2d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_glu_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_layer_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_mse_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_pad_constant_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_rot90_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_round_decimals_0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_scatter_reduce_prod_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_squeeze_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_trapz_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_T_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input__native_batch_norm_legit_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input__segment_reduce_lengths_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_acos_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_add_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_cdist_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_constant_pad_nd_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_dsplit_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_empty_strided_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_exp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_expand_as_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_expand_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_fft_ifftshift_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_frac_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_i0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_jiterator_unary_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_kron_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_kthvalue_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_linalg_matrix_rank_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_linalg_matrix_rank_hermitian_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_linalg_slogdet_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_linalg_svdvals_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_log_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_lu_unpack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_masked_argmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_masked_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_masked_var_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nanmean_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_adaptive_max_pool3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_avg_pool3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_conv1d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_cosine_similarity_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_dropout_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_embedding_bag_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_group_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_max_pool3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_max_unpool3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_pad_reflect_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_silu_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_softshrink_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_norm_nuc_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_permute_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_polar_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_randint_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_rsub_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_select_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_select_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_signal_windows_cosine_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_signbit_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_sparse_mm_reduce_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_bessel_y0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_bessel_y1_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_erfcx_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_hermite_polynomial_he_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_ndtri_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_shifted_chebyshev_polynomial_t_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_square_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_svd_lowrank_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_tensordot_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_true_divide_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_var_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_view_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad___getitem___cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad___rsub___cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_addcdiv_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_amax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_aminmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_atleast_2d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_cos_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_cross_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_cummin_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_diagonal_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_empty_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_eye_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_hsplit_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_index_put_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_linalg_det_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_linalg_lstsq_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_linalg_multi_dot_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_linalg_tensorsolve_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_linalg_vander_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_log10_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_log_softmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_masked_logsumexp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_masked_var_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_max_reduction_no_dim_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_mm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_new_empty_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_new_empty_strided_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_new_ones_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_conv1d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_conv_transpose1d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_cosine_similarity_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_ctc_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_hardshrink_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_huber_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_kl_div_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_pad_circular_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_pad_constant_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_rrelu_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_silu_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_soft_margin_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_upsample_nearest_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_polygamma_polygamma_n_3_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_randint_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_resolve_conj_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_rsqrt_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_select_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_signal_windows_bartlett_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_sparse_mm_reduce_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_special_chebyshev_polynomial_w_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_stack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_std_mean_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_topk_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_transpose_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_unflatten_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_H_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator___rmatmul___cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator___rmod___cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator__native_batch_norm_legit_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator__segment_reduce_offsets_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_addmv_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_operator_argwhere_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_as_strided_copy_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_as_strided_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_cdist_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_cummin_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_exp2_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_fft_fft_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_fft_ifft_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_index_fill_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_isclose_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_det_singular_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_eig_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_householder_product_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_inv_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_matrix_power_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_norm_subgradients_at_zero_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_slogdet_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_vector_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linspace_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_log_normal_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_logcumsumexp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_mT_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_masked_amax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_masked_argmin_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_masked_var_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_matrix_exp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_mean_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_native_dropout_backward_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_new_zeros_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_conv_transpose3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_cross_entropy_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_hinge_embedding_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_interpolate_nearest-exact_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_max_unpool2d_grad_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_multilabel_margin_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_pairwise_distance_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_silu_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_quantile_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_rand_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_real_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_reshape_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_roll_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_rsub_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_scatter_add_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_select_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_select_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_sign_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_signal_windows_general_hamming_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_slice_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_sparse_mm_reduce_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_entr_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_ndtr_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_scaled_modified_bessel_k0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_scaled_modified_bessel_k1_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_zeta_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_split_with_sizes_copy_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_take_along_dim_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_tensor_split_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_tril_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_unique_consecutive_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_vstack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_zero__cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_zeros_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay___radd___cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay__native_batch_norm_legit_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_aminmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_argmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_argsort_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_cdist_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_diagflat_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_div_floor_rounding_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_einsum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_fft_ifft2_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_fft_ifft_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_fill_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_fliplr_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_flipud_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_hypot_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_index_reduce_mean_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_isneginf_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_le_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_cond_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_lstsq_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_lu_factor_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_pinv_hermitian_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_qr_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_tensorinv_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_log_softmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_logdet_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_logical_and_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_mT_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_median_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_std_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_maximum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_median_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_min_reduction_with_dim_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_mode_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_msort_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nan_to_num_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_narrow_copy_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_adaptive_max_pool2d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_batch_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_conv3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_conv_transpose3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_dropout3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_embedding_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_kl_div_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_logsigmoid_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_margin_ranking_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_max_pool2d_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_max_pool3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_max_unpool1d_grad_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_pairwise_distance_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_pixel_unshuffle_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_poisson_nll_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_softmin_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_softsign_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_tanhshrink_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_outer_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_polar_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_polygamma_polygamma_n_4_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_ravel_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_renorm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_resize_as__cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_roll_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_scatter_add_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_sign_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_slice_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_special_entr_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_special_shifted_chebyshev_polynomial_v_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_split_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_stack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_std_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_std_mean_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_std_unbiased_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_stft_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_trapezoid_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_unsqueeze_copy_cuda_float32, test/test_ops.py::TestMathBitsCUDA::test_conj_view___radd___cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__chunk_cat_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs__conversions_bfloat16_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs__conversions_bool_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_acos_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_addr_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_broadcast_to_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_diag_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_empty_cuda_complex64, 
test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_eq_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_eye_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_fft_ifft2_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_index_add_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_index_copy_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_item_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_linspace_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_log10_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_logical_and_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_new_empty_strided_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_nn_functional_pixel_shuffle_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_nn_functional_tanhshrink_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_squeeze_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_sub_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_true_divide_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_unbind_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_unsqueeze_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_vstack_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_as_strided_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_as_strided_scatter_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_broadcast_to_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_cholesky_solve_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_count_nonzero_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_empty_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_fft_hfftn_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_fill_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_index_copy_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_int_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_jiterator_binary_return_by_ref_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_lerp_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_linalg_norm_subgradients_at_zero_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_linalg_qr_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_logcumsumexp_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_logical_and_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_logical_or_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_masked_fill_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_masked_prod_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_masked_sum_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_matrix_exp_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_meshgrid_list_of_tensors_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_nn_functional_channel_shuffle_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_put_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_resolve_neg_cuda_complex64, 
test/test_ops.py::TestMathBitsCUDA::test_conj_view_select_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_sinc_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_square_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_take_along_dim_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_take_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_tile_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_unfold_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_vsplit_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs__conversions_byte_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs__conversions_cdouble_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs__conversions_chalf_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_abs_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_as_strided_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_asinh_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_cumsum_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_diagonal_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_empty_like_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_empty_strided_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_fft_ifft2_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_fft_ifftn_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_index_copy_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_isinf_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_linalg_vector_norm_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_narrow_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_nn_functional_log_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_nn_functional_softmin_with_dtype_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_randn_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_stack_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_stft_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_triu_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_atleast_3d_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_byte_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_combinations_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_conj_physical_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_cumulative_trapezoid_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_dist_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_dsplit_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_empty_permuted_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_eye_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_geqrf_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_hsplit_cuda_complex128, 
test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_isreal_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_linalg_cond_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_linalg_det_singular_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_linalg_ldl_factor_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_logcumsumexp_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_masked_sum_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_pixel_unshuffle_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_silu_complex_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_norm_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_positive_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_pow_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_randn_like_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_repeat_interleave_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_rsqrt_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_select_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_short_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_sub_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_t_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_unsafe_chunk_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_where_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_view___rpow___cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs__conversions_chalf_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs__conversions_long_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_all_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_as_strided_copy_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_asin_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_block_diag_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_broadcast_tensors_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_ceil_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_conj_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_erfinv_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_fft_hfftn_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_fft_ihfft2_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_ge_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_hsplit_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_igammac_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_linspace_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_logical_and_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_ne_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_new_empty_strided_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_nn_functional_elu_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_nn_functional_glu_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_nn_functional_smooth_l1_loss_cuda_float64, 
test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_normal_number_mean_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_prod_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_round_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_sinh_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_tanh_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_tril_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_atan_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_cat_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_combinations_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_diag_embed_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_div_no_rounding_mode_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_div_trunc_rounding_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_equal_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_fft_rfft2_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_fft_rfft_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_flip_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_geqrf_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_gt_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_hypot_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_igamma_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_index_reduce_mean_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_isin_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_jiterator_binary_return_by_ref_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_eig_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_matrix_rank_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_pinv_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_vecdot_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linspace_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_logaddexp2_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_logical_not_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_logical_xor_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nanmean_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_ne_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_new_ones_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_batch_norm_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_binary_cross_entropy_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_binary_cross_entropy_with_logits_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_conv1d_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_conv_transpose1d_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_grid_sample_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_interpolate_nearest_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_leaky_relu_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_max_unpool1d_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_max_unpool3d_grad_cuda_float64, 
test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_smooth_l1_loss_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_softsign_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_normal_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_normal_number_mean_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_polygamma_polygamma_n_2_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_prod_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_rand_like_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_resolve_conj_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_round_decimals_0_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_scatter_reduce_sum_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_sinc_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_special_bessel_j1_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_special_chebyshev_polynomial_t_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_special_i0e_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_special_i1e_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_special_shifted_chebyshev_polynomial_u_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_stack_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_std_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_stft_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_trapz_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_view_as_complex_cuda_float64, test/test_ops.py::TestFakeTensorCUDA::test_fake___radd___cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_addmm_decomposed_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_argsort_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_as_strided_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_as_strided_scatter_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_atan2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_atan_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast___radd___cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_acos_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_aminmax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_as_strided_scatter_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_atan2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_baddbmm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_broadcast_shapes_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_cartesian_prod_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_cholesky_inverse_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_clamp_min_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_complex_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_copysign_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_corrcoef_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_div_trunc_rounding_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_exp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_fft_hfft_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_grid_sampler_2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_jiterator_binary_return_by_ref_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_linalg_eigh_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_linalg_pinv_hermitian_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_linalg_solve_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_linalg_tensorsolve_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_logsumexp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_masked_argmax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_native_layer_norm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_adaptive_avg_pool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_avg_pool2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_batch_norm_without_cudnn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_conv_transpose2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_cosine_similarity_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_gaussian_nll_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_interpolate_nearest_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_max_unpool1d_grad_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_max_unpool2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_poisson_nll_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_relu6_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_relu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_silu_complex_cuda_complex64, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_softmin_with_dtype_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_norm_nuc_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_real_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_scatter_reduce_sum_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_sigmoid_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_sign_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_sinh_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_sort_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_bessel_y1_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_chebyshev_polynomial_v_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_legendre_polynomial_p_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_ndtr_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_scaled_modified_bessel_k1_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_xlog1py_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_zeta_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_sub_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_tan_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_to_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_trunc_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_vsplit_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_bitwise_or_cuda_int64, test/test_ops.py::TestFakeTensorCUDA::test_fake_block_diag_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_chalf_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_copysign_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp__unsafe_masked_index_put_accumulate_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_amin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_as_strided_scatter_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_cholesky_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_cov_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_diff_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_fft_fftn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_fft_rfft_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_frexp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_index_reduce_amax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_linalg_cholesky_ex_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_linalg_diagonal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_linalg_lu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_linalg_svd_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_log_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_lu_unpack_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_masked_amax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_masked_amin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_masked_fill_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_matmul_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_msort_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_avg_pool1d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_avg_pool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_bilinear_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_channel_shuffle_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_logsigmoid_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_max_unpool2d_grad_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_pdist_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_pixel_unshuffle_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_relu6_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_softmin_with_dtype_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_polar_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_prod_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_round_decimals_neg_3_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_sigmoid_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_special_i0e_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_std_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_take_along_dim_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_tensordot_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_tile_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_unbind_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_unsafe_chunk_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_add_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_atan2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_corrcoef_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_diag_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_diagonal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_erf_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_fft_fftshift_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_i0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_index_reduce_amax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_index_reduce_prod_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_eigvalsh_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_lu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_qr_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_tensorinv_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_log_softmax_with_dtype_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_masked_fill_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_masked_median_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_masked_select_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_max_reduction_with_dim_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_min_reduction_no_dim_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_mm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_mv_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nanmedian_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_native_layer_norm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_adaptive_avg_pool2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_adaptive_avg_pool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_adaptive_max_pool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_avg_pool1d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_avg_pool2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_conv2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_hinge_embedding_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_l1_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_margin_ranking_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_max_unpool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_normalize_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_prelu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_relu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_permute_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_resolve_conj_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_resolve_neg_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_sinc_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_split_with_sizes_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_squeeze_multiple_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_svd_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_trace_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_var_mean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_vstack_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_div_no_rounding_mode_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_empty_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_empty_strided_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_erfc_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_exp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_fft_ifftn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_flip_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_fmax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_index_reduce_mean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_jiterator_unary_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_lcm_cuda_int64, test/test_ops.py::TestFakeTensorCUDA::test_fake_le_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_linalg_eig_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_linalg_ldl_solve_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_linalg_solve_ex_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_linalg_solve_triangular_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_log2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_matmul_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_meshgrid_list_of_tensors_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_mm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_mv_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nan_to_num_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_narrow_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_adaptive_avg_pool2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_alpha_dropout_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_channel_shuffle_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_conv3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_cosine_embedding_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_dropout2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_max_unpool1d_grad_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_silu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_normal_number_mean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_quantile_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_randint_like_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_randn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_scatter_add_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_scatter_reduce_mean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_sigmoid_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_signal_windows_exponential_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_sin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_softmax_with_dtype_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_sort_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_special_hermite_polynomial_h_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_special_modified_bessel_i1_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_special_modified_bessel_k1_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_triangular_solve_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_triu_indices_cuda_int64, test/test_ops.py::TestFakeTensorCUDA::test_fake_unfold_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_where_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops__softmax_backward_data_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_abs_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_all_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_angle_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_as_strided_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_baddbmm_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_bfloat16_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_bitwise_left_shift_cuda_int64, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_clamp_max_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_cummax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_diag_embed_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_diff_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_fft_fft2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_fft_ihfftn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_flipud_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_frexp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_full_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_geqrf_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_gradient_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_histc_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_igammac_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_imag_cuda_complex64, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_index_put_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_index_reduce_amin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_index_reduce_prod_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_isnan_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_jiterator_4inputs_with_extra_args_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_cholesky_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_ldl_solve_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_vecdot_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linspace_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linspace_tensor_overload_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_logical_and_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_masked_logsumexp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_masked_median_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_masked_norm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_matmul_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_mode_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_narrow_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_native_dropout_backward_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_new_empty_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_new_zeros_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_norm_fro_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_ones_like_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_polygamma_polygamma_n_4_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_put_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_reshape_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_rot90_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_select_scatter_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_sign_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_signal_windows_blackman_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_softmax_with_dtype_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_squeeze_multiple_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_t_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_tanh_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_tensor_split_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_trapz_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_unravel_index_cuda_int64, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_unsafe_chunk_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_arange_cuda_uint8, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_linspace_cuda_complex128, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_linspace_cuda_complex64, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_linspace_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_linspace_tensor_overload_cuda_int64, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_ones_cuda_int32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_zeros_cuda_float16, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_zeros_cuda_uint8, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_full_cuda_bfloat16, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_logspace_cuda_complex64, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_logspace_tensor_overload_cuda_int16, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_logspace_tensor_overload_cuda_int32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_ones_cuda_bfloat16, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_zeros_cuda_int16, test/test_ops.py::TestTagsCUDA::test_tags___rdiv___cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags___rxor___cuda_int64, test/test_ops.py::TestTagsCUDA::test_tags__refs__conversions_chalf_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_as_strided_copy_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_atan_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_broadcast_shapes_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_bucketize_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_diagonal_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_empty_strided_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_equal_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_expand_copy_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_expm1_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_fft_fft_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_fft_ihfftn_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_fill_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_floor_divide_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_gcd_cuda_int64, test/test_ops.py::TestTagsCUDA::test_tags__refs_i0_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_lcm_cuda_int64, 
test/test_ops.py::TestTagsCUDA::test_tags__refs_logical_and_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_masked_fill_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nan_to_num_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_ne_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_elu_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_glu_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_l1_loss_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_layer_norm_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_prelu_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_relu6_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_threshold_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_ones_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_pow_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_reciprocal_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_softmax_with_dtype_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_special_ndtri_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_square_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_std_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_tril_indices_cuda_int64, test/test_ops.py::TestTagsCUDA::test_tags__refs_triu_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_vdot_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_view_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__segment_reduce_offsets_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_addcmul_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_addmv_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_argwhere_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_block_diag_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_cholesky_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_column_stack_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_contiguous_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_cross_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_diag_embed_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_dot_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_expand_copy_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_expand_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_fft_fftshift_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_fft_hfftn_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_flatten_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_grid_sampler_2d_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_index_reduce_amax_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_istft_cuda_complex64, test/test_ops.py::TestTagsCUDA::test_tags_jiterator_unary_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_lcm_cuda_int64, test/test_ops.py::TestTagsCUDA::test_tags_linalg_eig_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_pinv_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_logdet_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_logical_and_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_lu_solve_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_masked_amin_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_max_pool2d_with_indices_backward_cuda_float32, 
test/test_ops.py::TestTagsCUDA::test_tags_max_reduction_with_dim_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nan_to_num_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nanmedian_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_narrow_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_native_dropout_backward_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_ne_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nextafter_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_batch_norm_without_cudnn_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_bilinear_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_binary_cross_entropy_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_grid_sample_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_hardtanh_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_interpolate_nearest-exact_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_leaky_relu_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_max_unpool1d_grad_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_pad_replicate_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_prelu_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nonzero_static_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_norm_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_normal_number_mean_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_randint_like_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_resolve_conj_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_scatter_reduce_mean_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_scatter_reduce_sum_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_softmax_with_dtype_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_special_hermite_polynomial_h_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_special_zeta_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_split_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_split_with_sizes_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_std_mean_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_trapezoid_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_triangular_solve_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_unbind_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_unique_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_var_mean_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_view_cuda_float32 2024-08-07T18:24:00.1370111Z 2024-08-07T18:24:03.8225137Z Running test_decomp 1/19 ... [2024-08-07 18:24:03.822000] 2024-08-07T18:24:03.8229010Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_decomp.py', '-m', 'not serial', '--shard-id=1', '--num-shards=19', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 18:24:03.822482] 2024-08-07T18:26:49.2859856Z 2024-08-07T18:26:49.2863128Z test_ops 7/11 was successful, full logs can be found in artifacts with path test/test-reports/test_ops_7.11_83a4b96c49e2cadd_.log 2024-08-07T18:26:49.4055472Z Running 3000 items in this shard: test/test_ops.py::TestCommonCUDA::test_compare_cpu___ror___cuda_int64, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs__conversions_bool_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs__conversions_half_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs__conversions_int_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_addcdiv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_addcmul_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_alias_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_diagonal_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_empty_strided_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_index_add_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_index_fill_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_log_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_logaddexp2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu__refs_new_ones_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_as_strided_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_as_strided_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_bfloat16_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_copysign_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_div_trunc_rounding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_expand_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_fft_fftshift_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_fft_ifftshift_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_flip_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_full_like_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_hstack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_hypot_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_index_put_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_index_reduce_prod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_int_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_linalg_cond_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_linalg_eigvals_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_linalg_pinv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_linalg_slogdet_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_logaddexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_logcumsumexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_lu_solve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_lu_unpack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_masked_logaddexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_masked_normalize_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_mul_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_adaptive_avg_pool3d_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_batch_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_dropout3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_interpolate_linear_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_interpolate_nearest_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_leaky_relu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_linear_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_margin_ranking_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_max_unpool2d_grad_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_multilabel_margin_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_normalize_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_nn_functional_pixel_unshuffle_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_put_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_rot90_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_scatter_add_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_select_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_sparse_mm_reduce_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_special_chebyshev_polynomial_u_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_special_legendre_polynomial_p_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_special_shifted_chebyshev_polynomial_v_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_std_unbiased_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_take_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_tensordot_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_view_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_xlogy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_compare_cpu_zeros_cuda_float32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_T_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_alias_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_cfloat_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_conj_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_diagonal_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_eq_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_fft_fftshift_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_fill_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_index_put_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_index_select_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_lerp_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_long_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_neg_cuda_complex32, 
test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_positive_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_resolve_neg_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_sgn_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_sin_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_split_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_trace_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_complex_half_reference_testing_zeros_like_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_dtypes__refs__conversions_bool_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs__conversions_float_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs__conversions_long_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_addcmul_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_bitwise_and_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_conj_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_conj_physical_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_contiguous_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_div_no_rounding_mode_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_exp2_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_eye_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_fft_fft_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_fft_ifftshift_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_fft_ihfftn_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_fft_irfftn_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_flatten_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_isnan_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_isneginf_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_item_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_linspace_tensor_overload_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_logspace_tensor_overload_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_nn_functional_dropout_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_nn_functional_hardshrink_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_ones_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_rad2deg_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_special_bessel_j0_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_special_multigammaln_mvlgamma_p_3_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_t_copy_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_vdot_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__refs_view_copy_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes__unsafe_masked_index_put_accumulate_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_allclose_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_aminmax_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_as_strided_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_bfloat16_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_bitwise_and_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_ceil_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_corrcoef_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_cov_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_cummax_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_cummin_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_cumprod_cuda, 
test/test_ops.py::TestCommonCUDA::test_dtypes_diff_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_fft_rfftn_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_fill_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_flatten_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_full_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_gather_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_histogramdd_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_imag_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_index_add_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_index_reduce_amin_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_int_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_isposinf_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_linalg_det_singular_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_linalg_eigvals_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_linalg_eigvalsh_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_linalg_solve_triangular_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_linalg_vander_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_logical_and_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_mT_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_masked_norm_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_matmul_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nansum_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_narrow_copy_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_adaptive_max_pool1d_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_bilinear_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_celu_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_feature_alpha_dropout_without_train_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_fractional_max_pool2d_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_gelu_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_hardtanh_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_hinge_embedding_loss_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_interpolate_area_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_l1_loss_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_max_unpool3d_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_prelu_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_relu_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_scaled_dot_product_attention_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_silu_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_soft_margin_loss_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_softmin_with_dtype_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_softplus_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_nn_functional_triplet_margin_with_distance_loss_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_ones_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_pinverse_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_polygamma_polygamma_n_3_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_put_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_randn_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_round_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_scatter_reduce_amin_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_signal_windows_general_hamming_cuda, 
test/test_ops.py::TestCommonCUDA::test_dtypes_sin_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_sinc_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_special_bessel_y0_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_special_i1e_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_sqrt_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_std_unbiased_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_sub_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_tile_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_torch_ops_aten__efficient_attention_forward_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_unique_consecutive_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_var_mean_unbiased_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_vsplit_cuda, test/test_ops.py::TestCommonCUDA::test_dtypes_vstack_cuda, test/test_ops.py::TestCommonCUDA::test_errors___rsub___cuda, test/test_ops.py::TestCommonCUDA::test_errors_amin_cuda, test/test_ops.py::TestCommonCUDA::test_errors_bucketize_cuda, test/test_ops.py::TestCommonCUDA::test_errors_cat_cuda, test/test_ops.py::TestCommonCUDA::test_errors_diag_cuda, test/test_ops.py::TestCommonCUDA::test_errors_fft_fft_cuda, test/test_ops.py::TestCommonCUDA::test_errors_fft_ifft_cuda, test/test_ops.py::TestCommonCUDA::test_errors_fft_irfftn_cuda, test/test_ops.py::TestCommonCUDA::test_errors_fliplr_cuda, test/test_ops.py::TestCommonCUDA::test_errors_float_power_cuda, test/test_ops.py::TestCommonCUDA::test_errors_gather_cuda, test/test_ops.py::TestCommonCUDA::test_errors_jiterator_binary_cuda, test/test_ops.py::TestCommonCUDA::test_errors_linalg_lstsq_cuda, test/test_ops.py::TestCommonCUDA::test_errors_nn_functional_gaussian_nll_loss_cuda, test/test_ops.py::TestCommonCUDA::test_errors_nn_functional_margin_ranking_loss_cuda, test/test_ops.py::TestCommonCUDA::test_errors_pow_cuda, test/test_ops.py::TestCommonCUDA::test_errors_signal_windows_hamming_cuda, test/test_ops.py::TestCommonCUDA::test_errors_sparse_mul_layout4_cuda, test/test_ops.py::TestCommonCUDA::test_errors_special_hermite_polynomial_he_cuda, test/test_ops.py::TestCommonCUDA::test_errors_sum_to_size_cuda, test/test_ops.py::TestCommonCUDA::test_errors_trace_cuda, test/test_ops.py::TestCommonCUDA::test_errors_view_cuda, test/test_ops.py::TestCommonCUDA::test_errors_xlogy_cuda, test/test_ops.py::TestCommonCUDA::test_multiple_devices___radd___cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices___rdiv___cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices___rmatmul___cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices___rsub___cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices__chunk_cat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices__native_batch_norm_legit_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices__unsafe_masked_index_put_accumulate_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_addbmm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_amin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_aminmax_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_as_strided_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_as_strided_scatter_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_broadcast_shapes_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_broadcast_tensors_cuda_int64, 
test/test_ops.py::TestCommonCUDA::test_multiple_devices_broadcast_to_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_constant_pad_nd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_count_nonzero_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_cov_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_cumprod_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_cumsum_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_cumulative_trapezoid_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_double_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_empty_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_empty_permuted_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fft_fftshift_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fft_ifftshift_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fft_irfft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_float_power_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_fmin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_geometric_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_histc_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_hsplit_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_index_add_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_index_reduce_amin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_int_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_isin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_jiterator_4inputs_with_extra_args_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_le_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_lgamma_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_linalg_lstsq_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_linalg_matrix_rank_hermitian_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_linalg_pinv_hermitian_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_logaddexp2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_logical_xor_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_logit_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_logspace_tensor_overload_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_masked_cumprod_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_masked_select_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_mul_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_multinomial_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nanmedian_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_narrow_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_new_empty_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_new_empty_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_new_zeros_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_channel_shuffle_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_conv_transpose1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_instance_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_interpolate_area_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_interpolate_nearest_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_layer_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_max_unpool3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_mish_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_pad_reflect_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_silu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_nn_functional_threshold_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_ones_like_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_permute_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_polygamma_polygamma_n_3_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_pow_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_rand_like_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_real_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_reshape_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_resize__cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_resize__cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_select_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_sgn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_sinh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_softmax_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_bessel_j0_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_bessel_y0_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_bessel_y0_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_modified_bessel_i1_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_modified_bessel_k0_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_ndtri_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_special_shifted_chebyshev_polynomial_u_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_squeeze_multiple_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_std_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_sub_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_sum_to_size_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_take_along_dim_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_trapz_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_unbind_cuda_int64, test/test_ops.py::TestCommonCUDA::test_multiple_devices_unique_cuda_float32, test/test_ops.py::TestCommonCUDA::test_multiple_devices_where_cuda_int64, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_asin_cuda_bool, 
test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_block_diag_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_count_nonzero_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_deg2rad_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_diag_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_diagonal_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_eq_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_erfinv_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_exp2_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_expand_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_fft_ifftn_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_fft_ihfftn_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_fft_rfft2_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_gt_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_half_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_hstack_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_lgamma_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_masked_select_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_min_reduction_with_dim_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_narrow_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_nn_functional_softsign_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_ones_like_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_roll_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_scatter_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_short_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_special_bessel_y0_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_special_hermite_polynomial_he_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_special_i1e_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_special_scaled_modified_bessel_k1_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_squeeze_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_tanh_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_triu_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_unsafe_split_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_unsqueeze_cuda_bool, test/test_ops.py::TestCommonCUDA::test_non_standard_bool_values_vstack_cuda_bool, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples___radd___cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples___rmul___cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples__chunk_cat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples__softmax_backward_data_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples__unsafe_masked_index_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_addcmul_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_addcmul_cuda_int64, 
test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_addmm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_addmm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_addr_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_alias_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_all_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_all_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_argmax_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_argwhere_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_atleast_2d_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_atleast_3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_bfloat16_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_bitwise_left_shift_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_block_diag_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cartesian_prod_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cdouble_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cdouble_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cholesky_inverse_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cholesky_inverse_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_chunk_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_constant_pad_nd_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_constant_pad_nd_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_count_nonzero_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cross_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cummax_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cumsum_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_cumsum_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_diagonal_scatter_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_div_no_rounding_mode_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_div_trunc_rounding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_double_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_empty_permuted_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_erf_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_erfc_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_erfc_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_expand_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_hfft2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_fft_ifft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_flip_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_flip_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_float_power_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_i0_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_index_add_cuda_complex64, 
test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_index_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_index_reduce_amax_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_index_select_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_isclose_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_isnan_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_kron_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_lgamma_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_cholesky_ex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_cross_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_ldl_factor_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_lu_solve_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_matrix_rank_hermitian_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_norm_subgradients_at_zero_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_pinv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_pinv_singular_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_solve_triangular_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linalg_vecdot_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_linspace_tensor_overload_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_log2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_log_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_logical_or_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_logical_xor_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_logspace_tensor_overload_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_lu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_amin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_argmax_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_fill_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_softmin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_masked_std_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_maximum_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_median_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_minimum_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_mul_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_alpha_dropout_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_conv1d_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_dropout3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_kl_div_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_multilabel_margin_loss_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_pad_circular_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_smooth_l1_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_softmin_with_dtype_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_tanhshrink_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_nn_functional_triplet_margin_with_distance_loss_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_normal_number_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_ones_like_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_polygamma_polygamma_n_0_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_polygamma_polygamma_n_2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_polygamma_polygamma_n_3_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_qr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_rand_like_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_randint_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_randn_like_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_scatter_add_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_scatter_reduce_amax_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_scatter_reduce_sum_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_signal_windows_kaiser_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_bessel_y1_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_chebyshev_polynomial_w_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_entr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_laguerre_polynomial_l_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_polygamma_special_polygamma_n_0_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_scaled_modified_bessel_k1_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_shifted_chebyshev_polynomial_v_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_special_shifted_chebyshev_polynomial_w_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_split_with_sizes_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_svd_lowrank_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_t_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_take_along_dim_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_take_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_tensor_split_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_tile_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_to_sparse_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_transpose_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_tril_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_triu_cuda_complex64, 
test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_true_divide_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_uniform_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_unique_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_unsqueeze_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_unsqueeze_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_var_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_var_mean_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_view_as_real_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_vsplit_cuda_int64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_vstack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_where_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_zero__cuda_float32, test/test_ops.py::TestCommonCUDA::test_noncontiguous_samples_zeros_like_cuda_int64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_argwhere_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_numpy_ref_cat_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_diag_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_diff_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_numpy_ref_equal_cuda_int64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_jiterator_4inputs_with_extra_args_cuda_int64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_linalg_cross_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_nn_functional_conv_transpose1d_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_numpy_ref_nn_functional_conv_transpose3d_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_nn_functional_group_norm_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_nn_functional_rms_norm_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_searchsorted_cuda_float64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_tensor_split_cuda_int64, test/test_ops.py::TestCommonCUDA::test_numpy_ref_transpose_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_out___rxor___cuda_int64, test/test_ops.py::TestCommonCUDA::test_out__batch_norm_with_update_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs__conversions_double_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_atleast_1d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_bitwise_xor_cuda_int64, test/test_ops.py::TestCommonCUDA::test_out__refs_ceil_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_clamp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_cosh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_empty_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_exp2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_expand_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_hypot_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_linalg_diagonal_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_log10_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_logaddexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_logical_and_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_narrow_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_nn_functional_log_softmax_with_dtype_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_out__refs_nn_functional_nll_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_nn_functional_relu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_real_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_renorm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_tensor_split_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_trunc_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_unfold_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out__refs_view_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_atleast_2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_atleast_3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_broadcast_to_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_clamp_min_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_column_stack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_double_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_equal_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_exp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_expand_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_expand_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_expand_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_fft_fftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_fft_ifft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_flatten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_full_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_heaviside_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_histc_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_hsplit_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_index_reduce_prod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_integral_dtype__refs_sum_cuda_int16, test/test_ops.py::TestCommonCUDA::test_out_jiterator_unary_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_kron_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_linalg_cholesky_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_linalg_inv_ex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_linalg_vecdot_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_linspace_tensor_overload_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_logical_not_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_logspace_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_masked_sum_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_max_pool2d_with_indices_backward_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_max_reduction_no_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_mv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nan_to_num_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_adaptive_max_pool2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_bilinear_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_embedding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_group_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_interpolate_trilinear_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_local_response_norm_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_out_nn_functional_multi_head_attention_forward_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_pad_replicate_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_softplus_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_nn_functional_threshold_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_polygamma_polygamma_n_4_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_rad2deg_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_randint_like_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_abs_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_addbmm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_alias_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_atanh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_ceil_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_diff_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_exp_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_fft_fft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_fft_irfft_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_fft_irfftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_fmin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_frexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_lu_factor_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_matrix_norm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_norm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_pinv_hermitian_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_solve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_svd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_linalg_tensorsolve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_logspace_tensor_overload_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_lu_unpack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_max_binary_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_nansum_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_nn_functional_gelu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_norm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_qr_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_square_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_std_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_triangular_solve_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_triu_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_out_requires_grad_error_xlogy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_reshape_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_round_decimals_0_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_scalar_tensor_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_out_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_sparse_sampled_addmm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_special_airy_ai_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_special_chebyshev_polynomial_t_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_split_with_sizes_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_stack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_sub_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_t_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_tril_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_uniform_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_unique_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_var_unbiased_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_view_as_complex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_view_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_out_warning___radd___cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__batch_norm_with_update_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_acosh_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_addcdiv_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_amax_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_atan2_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_atleast_2d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_cosh_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_diagonal_copy_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_div_floor_rounding_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_empty_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_eye_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fft_fft_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fft_irfft2_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fft_rfft2_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_floor_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_fmin_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_gcd_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_i0_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_index_fill_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_index_select_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_istft_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_linalg_norm_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_linalg_svdvals_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_log10_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_nn_functional_glu_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_nn_functional_margin_ranking_loss_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_nn_functional_pdist_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_nn_functional_pixel_shuffle_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_norm_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_normal_number_mean_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_special_erfcx_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_special_multigammaln_mvlgamma_p_3_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_special_xlog1py_cuda, 
test/test_ops.py::TestCommonCUDA::test_out_warning__refs_sub_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_tan_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_unflatten_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_vdot_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__refs_vsplit_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__segment_reduce_lengths_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning__segment_reduce_offsets_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_alias_copy_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_any_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_argsort_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_atanh_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_chunk_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_cumsum_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_diagonal_copy_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_eye_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_fft_ihfft_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_flatten_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_frexp_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_half_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_histogram_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_hstack_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_index_fill_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_isneginf_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_kron_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_lerp_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_linalg_cholesky_ex_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_linalg_det_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_linalg_lstsq_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_linalg_lu_factor_ex_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_linalg_pinv_hermitian_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_linalg_solve_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_log2_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_logical_and_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_masked_cumsum_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_meshgrid_list_of_tensors_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_native_batch_norm_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_alpha_dropout_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_avg_pool2d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_binary_cross_entropy_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_conv1d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_conv2d_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_l1_loss_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_leaky_relu_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_max_unpool1d_grad_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_nll_loss_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nn_functional_relu6_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_nonzero_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_polygamma_polygamma_n_4_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_real_cuda, 
test/test_ops.py::TestCommonCUDA::test_out_warning_repeat_interleave_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_scatter_reduce_mean_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_select_scatter_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_signal_windows_blackman_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_signal_windows_general_hamming_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_softmax_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_special_i1_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_split_with_sizes_copy_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_tensordot_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_to_sparse_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_true_divide_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_unfold_copy_cuda, test/test_ops.py::TestCommonCUDA::test_out_warning_unsqueeze_copy_cuda, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_acos_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_acosh_cuda_int16, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_acosh_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_asin_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_asinh_cuda_int16, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_asinh_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_atan2_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_atan2_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_div_no_rounding_mode_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_erf_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_erfc_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_exp2_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_ldexp_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_lgamma_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_log_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_log_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_masked_mean_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_masked_var_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_mvlgamma_mvlgamma_p_1_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_polygamma_polygamma_n_0_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_polygamma_polygamma_n_0_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_polygamma_polygamma_n_1_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_polygamma_polygamma_n_2_cuda_int16, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_rsqrt_cuda_int16, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_rsqrt_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_sigmoid_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_sigmoid_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_sin_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_sinh_cuda_int16, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_chebyshev_polynomial_t_cuda_int32, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_chebyshev_polynomial_t_cuda_uint8, 
test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_chebyshev_polynomial_u_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_chebyshev_polynomial_v_cuda_bool, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_chebyshev_polynomial_w_cuda_int16, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_hermite_polynomial_he_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_laguerre_polynomial_l_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_shifted_chebyshev_polynomial_v_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_shifted_chebyshev_polynomial_w_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_xlog1py_cuda_int64, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_special_zeta_cuda_int8, test/test_ops.py::TestCommonCUDA::test_promotes_int_to_float_true_divide_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_T_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_bfloat16_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_bfloat16_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_bfloat16_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_byte_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_cdouble_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_chalf_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_chalf_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_chalf_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_chalf_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_char_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_char_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_complex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_double_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_float_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_float_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_half_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_int_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_long_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_long_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_long_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs__conversions_short_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_acos_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_acos_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_acosh_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_add_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_add_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_add_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addcdiv_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addr_cuda_int64, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_addr_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_alias_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_all_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_arange_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_copy_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_copy_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_partial_views_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_scatter_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_scatter_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_scatter_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_as_strided_scatter_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_asin_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_asin_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_asin_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atleast_2d_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_atleast_2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bitwise_left_shift_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bitwise_not_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bitwise_or_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bitwise_xor_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_block_diag_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_block_diag_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_broadcast_shapes_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_broadcast_to_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_bucketize_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cat_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cat_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ceil_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_chunk_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_clamp_max_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_clamp_max_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_clone_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_column_stack_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_column_stack_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_conj_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_conj_physical_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_conj_physical_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_constant_pad_nd_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_contiguous_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_copysign_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_copysign_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cos_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_cumsum_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_deg2rad_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diag_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diag_embed_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diag_embed_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diag_embed_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diagonal_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diagonal_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diagonal_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diagonal_scatter_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_diagonal_scatter_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_digamma_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_digamma_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_digamma_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_div_floor_rounding_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_div_no_rounding_mode_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_div_no_rounding_mode_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_div_trunc_rounding_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_div_trunc_rounding_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dsplit_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dsplit_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dsplit_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dstack_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_dstack_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_empty_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_empty_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_empty_like_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_empty_like_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_eq_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_eq_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_eq_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_erfc_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_erfc_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_exp_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_as_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_copy_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expand_copy_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expm1_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_expm1_cuda_complex64, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_eye_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_eye_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_fft2_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_fft2_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_fft_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_fftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_fftshift_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_hfft2_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_hfft2_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifft2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifft_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifft_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifft_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ifftshift_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_ihfft2_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_irfft2_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_irfftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_irfftn_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_irfftn_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_rfft_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fft_rfft_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fill_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_flatten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_flatten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_flatten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_flip_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fliplr_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_flipud_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_flipud_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_float_power_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_floor_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_floor_divide_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_fmod_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_frac_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_geometric_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_heaviside_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_heaviside_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_hsplit_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_hstack_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_imag_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_add_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_copy_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_copy_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_select_cuda_complex128, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_select_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_index_select_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isclose_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isinf_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isposinf_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isposinf_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_isreal_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_item_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_lerp_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_lgamma_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_lgamma_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_diagonal_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_diagonal_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_diagonal_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_diagonal_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linalg_svd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linspace_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linspace_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linspace_tensor_overload_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_linspace_tensor_overload_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log10_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log1p_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log1p_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log2_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log2_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_normal_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_log_softmax_with_dtype_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logaddexp2_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logical_and_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logical_and_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logical_not_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logsumexp_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_logsumexp_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_lt_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_meshgrid_variadic_tensors_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_meshgrid_variadic_tensors_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_minimum_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_movedim_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_movedim_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_mul_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_mul_cuda_int16, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_mul_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nan_to_num_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nan_to_num_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_narrow_copy_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ne_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_neg_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_empty_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_empty_strided_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_full_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_ones_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_zeros_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_new_zeros_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nextafter_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_celu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_dropout_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_glu_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_glu_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_hardshrink_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_hardshrink_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_hardtanh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_hinge_embedding_loss_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_huber_loss_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_l1_loss_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_l1_loss_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_layer_norm_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_log_softmax_with_dtype_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_margin_ranking_loss_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_margin_ranking_loss_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_margin_ranking_loss_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_mish_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_mish_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_nll_loss_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_pdist_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_pdist_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_pixel_shuffle_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_poisson_nll_loss_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_poisson_nll_loss_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_relu6_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_selu_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_softmax_with_dtype_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_tanhshrink_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_threshold_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_nn_functional_triplet_margin_loss_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_normal_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ones_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ones_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_permute_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_positive_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_pow_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_pow_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_prod_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_rad2deg_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_rad2deg_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_ravel_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_real_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_real_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_renorm_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_reshape_as_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_reshape_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_roll_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_roll_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_rot90_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_rsqrt_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_select_scatter_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sigmoid_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sigmoid_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sign_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_signbit_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sin_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sin_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sinh_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sinh_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sinh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sinh_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_bessel_j0_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_bessel_j0_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_bessel_j0_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_bessel_j1_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_entr_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_erfcx_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_i1_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_i1_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_i1e_cuda_bool, 
test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_i1e_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_logit_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_multigammaln_mvlgamma_p_5_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_special_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_square_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_squeeze_multiple_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_squeeze_multiple_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_stack_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_stack_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sub_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_sum_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_take_along_dim_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_tensor_split_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_triu_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_true_divide_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_trunc_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unbind_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unbind_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unfold_copy_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unfold_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unfold_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unsqueeze_copy_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_unsqueeze_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_var_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_vdot_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_copy_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_copy_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_view_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_vsplit_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_vsplit_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_where_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_where_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref__refs_xlogy_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_bitwise_and_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_copysign_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_dot_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_dsplit_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_fft_fft2_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_fft_fftn_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_fft_hfft_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_fft_ihfft2_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_fmod_cuda, 
test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_index_select_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_item_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_linalg_cross_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_movedim_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_nn_functional_l1_loss_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_special_xlog1py_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_t_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_errors__refs_trace_cuda, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_T_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_bool_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_bool_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cdouble_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cdouble_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cdouble_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cdouble_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cfloat_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_cfloat_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_char_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_int_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_long_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_long_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_long_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_short_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_short_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_short_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_short_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs__conversions_short_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_acosh_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_acosh_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_acosh_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_add_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_add_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_addcdiv_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_addcdiv_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_amin_executor_aten_cuda_bool, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_amin_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_any_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_any_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_any_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_arange_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_arange_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_arange_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_scatter_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_scatter_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_as_strided_scatter_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_asin_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_asinh_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_asinh_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atan2_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atan_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atan_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atanh_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atanh_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atleast_2d_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atleast_3d_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atleast_3d_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_atleast_3d_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_bitwise_left_shift_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_block_diag_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_block_diag_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_tensors_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_tensors_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_tensors_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_to_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_to_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_broadcast_to_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_bucketize_executor_aten_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_chunk_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_chunk_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_clamp_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_clamp_max_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_clone_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_column_stack_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_column_stack_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_conj_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_conj_physical_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_conj_physical_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_conj_physical_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_contiguous_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_contiguous_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cos_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cos_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cosh_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cosh_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_cosh_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_count_nonzero_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_count_nonzero_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diag_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diag_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diag_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_copy_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_copy_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_copy_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_scatter_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_scatter_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_diagonal_scatter_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_div_no_rounding_mode_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_div_no_rounding_mode_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_div_trunc_rounding_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_dsplit_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_dstack_executor_aten_cuda_bool, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_empty_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_empty_strided_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_empty_strided_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_erfinv_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_exp2_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_exp_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_expand_as_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_expand_copy_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_expand_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_eye_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fft2_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fft_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fftn_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_fftshift_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_hfft_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ifft_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ifftshift_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ihfft2_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ihfft_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_ihfftn_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_irfft2_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_rfft2_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_rfft_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_rfft_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fft_rfftn_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_flatten_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_flatten_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_flatten_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_flip_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_flip_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fliplr_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_float_power_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_float_power_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_floor_divide_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_floor_executor_aten_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fmax_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fmax_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_fmod_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_frexp_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_frexp_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_frexp_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_gcd_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_gcd_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_gcd_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ge_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ge_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ge_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_heaviside_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_i0_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_igammac_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_imag_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_index_fill_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_index_select_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_index_select_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_index_select_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isclose_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isfinite_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isinf_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isnan_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isneginf_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isposinf_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isposinf_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isposinf_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isreal_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_isreal_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_lcm_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_le_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_lerp_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_linalg_cross_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_linalg_svd_executor_aten_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log10_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log10_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log10_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log1p_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log2_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log_normal_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_log_softmax_with_dtype_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logical_and_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logical_not_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_tensor_overload_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_tensor_overload_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logspace_tensor_overload_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_logsumexp_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_masked_fill_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_mean_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_mean_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_meshgrid_variadic_tensors_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_minimum_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_movedim_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_mul_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_narrow_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_narrow_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ne_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ne_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_new_empty_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_new_empty_strided_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_new_empty_strided_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_new_ones_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_gelu_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_hardtanh_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_huber_loss_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_l1_loss_executor_aten_cuda_float16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_leaky_relu_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_log_softmax_with_dtype_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_mish_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_mse_loss_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_pairwise_distance_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_pixel_unshuffle_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_pixel_unshuffle_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_poisson_nll_loss_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_softplus_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_softplus_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_nn_functional_tanhshrink_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_normal__in_place_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_normal_number_mean_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ones_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ones_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_permute_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_positive_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_pow_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_pow_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_prod_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_prod_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_ravel_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_real_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_real_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_remainder_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_reshape_as_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_reshape_as_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_reshape_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_roll_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_roll_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_roll_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_round_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_rsub_executor_aten_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_rsub_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_select_scatter_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sgn_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sigmoid_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sigmoid_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sigmoid_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sin_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sinh_executor_aten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sinh_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_bessel_j0_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_entr_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_erfcx_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_i0e_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_i1_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_i1_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_i1e_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_logit_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_logit_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_logit_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_spherical_bessel_j0_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_xlog1py_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_special_zeta_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_square_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_square_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_squeeze_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_squeeze_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_std_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_std_mean_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sub_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sub_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sum_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sum_to_size_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sum_to_size_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_sum_to_size_executor_aten_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_t_copy_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_take_along_dim_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tanh_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tanh_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tensor_split_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_to_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_trace_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_trace_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_trace_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tril_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_tril_indices_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_triu_executor_aten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_triu_indices_executor_aten_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_triu_indices_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_true_divide_executor_aten_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_trunc_executor_aten_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unbind_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unfold_executor_aten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_unsqueeze_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_var_executor_aten_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_var_executor_aten_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_vsplit_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_vsplit_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_where_executor_aten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_xlogy_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_xlogy_executor_aten_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_zeros_executor_aten_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_zeros_executor_aten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_executor__refs_zeros_executor_aten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_T_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_T_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_cfloat_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_cfloat_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_chalf_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_char_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_complex_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_double_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_float_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_half_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_half_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_long_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs__conversions_short_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_add_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_add_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addcdiv_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addcdiv_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addcmul_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addcmul_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addcmul_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addr_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_addr_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_alias_copy_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_all_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_allclose_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_allclose_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_amax_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_amax_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_amin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_any_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_any_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_partial_views_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_partial_views_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_partial_views_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_as_strided_partial_views_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_asinh_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_asinh_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atan_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atan_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atanh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atanh_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_atleast_3d_cuda_int32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_bitwise_or_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_bitwise_right_shift_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_broadcast_tensors_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_broadcast_to_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cat_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_ceil_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_chunk_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_chunk_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_chunk_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clamp_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clamp_max_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clamp_min_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clone_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_clone_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_conj_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_conj_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_conj_physical_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_constant_pad_nd_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_contiguous_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cosh_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cumprod_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cumprod_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_cumprod_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_deg2rad_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_deg2rad_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_deg2rad_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diag_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diag_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diag_embed_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diag_embed_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diagonal_copy_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diagonal_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diagonal_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diagonal_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diagonal_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_diagonal_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_digamma_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_div_trunc_rounding_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_dstack_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_like_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_strided_cuda_int64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_empty_strided_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_equal_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_erf_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_erfinv_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_erfinv_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_erfinv_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_exp2_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_exp_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expand_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expand_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_expand_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_eye_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_fft2_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_fftn_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_fftshift_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_hfft_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_hfftn_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_hfftn_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifft2_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifft_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifftshift_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifftshift_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_ifftshift_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fft_irfftn_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flatten_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flatten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flatten_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flip_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flip_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flip_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fliplr_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flipud_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_flipud_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_float_power_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_floor_divide_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fmax_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fmin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fmod_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fmod_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_fmod_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_frexp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_ge_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_geometric_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_gt_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_heaviside_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_heaviside_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_hsplit_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_hsplit_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_hstack_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_hypot_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_add_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_add_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_copy_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_fill_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_fill_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_fill_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_select_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_select_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_index_select_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isclose_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isclose_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isfinite_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isfinite_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isinf_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isnan_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isposinf_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_isreal_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_item_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_le_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_lgamma_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_cross_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_diagonal_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_diagonal_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_diagonal_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_svd_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linalg_svd_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linspace_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linspace_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_linspace_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log10_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log2_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log_softmax_with_dtype_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_log_softmax_with_dtype_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logical_and_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logical_xor_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logical_xor_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logical_xor_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logspace_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_logspace_tensor_overload_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_masked_fill_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_masked_fill_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_meshgrid_list_of_tensors_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_meshgrid_variadic_tensors_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_meshgrid_variadic_tensors_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_movedim_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_narrow_copy_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_narrow_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_narrow_copy_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_narrow_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_neg_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_alpha_dropout_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_dropout_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_dropout_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_glu_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_hardshrink_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_hinge_embedding_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_l1_loss_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_log_softmax_with_dtype_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_margin_ranking_loss_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_mse_loss_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_pairwise_distance_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_pairwise_distance_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_pairwise_distance_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_pixel_unshuffle_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_pixel_unshuffle_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_poisson_nll_loss_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_relu_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_relu_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_relu_cuda_int64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_softmax_with_dtype_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_softmin_with_dtype_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_softmin_with_dtype_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_softshrink_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_softshrink_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_tanhshrink_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_threshold_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_triplet_margin_loss_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_nn_functional_triplet_margin_loss_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_norm_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_normal__in_place_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_normal_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_ones_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_ones_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_permute_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_permute_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_permute_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_prod_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rad2deg_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_ravel_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_real_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_renorm_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_reshape_as_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_reshape_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_reshape_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_reshape_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rot90_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rsqrt_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rsqrt_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_rsub_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sgn_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sgn_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sgn_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sign_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sign_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_signbit_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sin_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sin_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_bessel_j0_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_bessel_j1_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_erfcx_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_i0e_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_i0e_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_i0e_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_log_ndtr_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_logit_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_logit_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_multigammaln_mvlgamma_p_1_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_multigammaln_mvlgamma_p_1_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_multigammaln_mvlgamma_p_3_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_multigammaln_mvlgamma_p_5_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_ndtr_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_special_ndtr_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sqrt_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_square_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_square_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_squeeze_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_squeeze_multiple_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_squeeze_multiple_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_stack_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_std_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sum_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sum_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_sum_to_size_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_t_copy_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_t_copy_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_take_along_dim_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_tanh_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_tensor_split_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_to_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_to_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_trace_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_tril_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unbind_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unbind_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unflatten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_unsqueeze_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_view_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_view_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_view_cuda_uint8, 
test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_vstack_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_where_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_meta__refs_zeros_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_T_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_T_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_bfloat16_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_bfloat16_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_byte_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_cdouble_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_cfloat_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_cfloat_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_cfloat_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_cfloat_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_chalf_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_chalf_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_chalf_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_char_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_char_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_double_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_double_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_float_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_float_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_half_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_int_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_long_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_polar_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_short_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs__conversions_short_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_acos_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_acos_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_add_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_add_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_add_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_addcdiv_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_addr_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_all_cuda_complex64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_all_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_all_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_amax_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_amax_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_amax_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_amin_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_amin_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_any_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_arange_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_as_strided_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_as_strided_partial_views_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_as_strided_partial_views_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_as_strided_partial_views_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_as_strided_partial_views_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_asin_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_asinh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atan2_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atan_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atan_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atanh_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_atleast_1d_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bitwise_left_shift_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bitwise_not_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bitwise_or_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bitwise_xor_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_block_diag_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_block_diag_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bucketize_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_bucketize_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cat_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cauchy_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ceil_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_clamp_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_clamp_max_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_clone_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_column_stack_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_column_stack_cuda_float64, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_constant_pad_nd_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_contiguous_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_copysign_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_copysign_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cos_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cos_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_count_nonzero_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cumprod_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_cumsum_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_deg2rad_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diag_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diag_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diag_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diag_embed_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diagonal_copy_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diagonal_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diagonal_copy_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diagonal_copy_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diagonal_scatter_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_diagonal_scatter_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_digamma_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_div_no_rounding_mode_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_dot_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_dsplit_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_empty_like_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_empty_like_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_empty_strided_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_erfc_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_exp2_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_as_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_as_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_copy_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expand_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expm1_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_expm1_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_fft2_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_fft_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_fft_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_fft_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_fftn_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_hfft2_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_hfftn_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifft2_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifft_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifft_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifftn_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifftshift_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ifftshift_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_ihfftn_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_irfft2_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_irfft_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_irfftn_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_rfft_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fft_rfftn_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fill_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fill_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_flatten_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_flatten_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_flatten_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fliplr_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fliplr_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_flipud_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_float_power_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_float_power_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_floor_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_floor_divide_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_floor_divide_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmax_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmin_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmin_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_fmod_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_geometric_cuda_bfloat16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_geometric_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_hsplit_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_imag_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_copy_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_fill_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_fill_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_select_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_index_select_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isfinite_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isfinite_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isinf_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isinf_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isnan_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isnan_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isneginf_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isneginf_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isneginf_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_isposinf_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_item_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_item_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_le_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_lerp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_diagonal_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_diagonal_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_diagonal_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_matrix_norm_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_norm_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_svdvals_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linalg_vector_norm_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linspace_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linspace_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_linspace_tensor_overload_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log10_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log10_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log10_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log2_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log_cuda_complex32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_log_softmax_with_dtype_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logical_not_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logical_not_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logical_or_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logical_or_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logical_xor_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_logspace_tensor_overload_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_lt_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_masked_fill_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_mean_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_mean_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_meshgrid_list_of_tensors_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_meshgrid_variadic_tensors_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_minimum_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_minimum_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_movedim_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_mul_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_narrow_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_narrow_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_narrow_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_narrow_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ne_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ne_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_strided_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_strided_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_empty_strided_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_zeros_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_new_zeros_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_celu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_channel_shuffle_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_dropout_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_hardshrink_cuda_float16, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_l1_loss_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_leaky_relu_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_margin_ranking_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_margin_ranking_loss_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_mse_loss_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_pairwise_distance_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_pixel_shuffle_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_pixel_unshuffle_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_pixel_unshuffle_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_prelu_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_relu6_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_relu6_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_relu_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_selu_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_softmax_with_dtype_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_softplus_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_tanhshrink_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_tanhshrink_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_tanhshrink_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_triplet_margin_loss_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_nn_functional_triplet_margin_loss_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_normal__in_place_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_normal__in_place_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_normal_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_normal_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ones_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_positive_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_positive_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_pow_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_pow_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_pow_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_prod_cuda_bool, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_randn_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ravel_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_ravel_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_real_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_real_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_reciprocal_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_renorm_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_repeat_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_reshape_as_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_reshape_as_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_reshape_as_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_reshape_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_reshape_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_rot90_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_round_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_rsqrt_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_rsqrt_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_rsub_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_rsub_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sgn_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sigmoid_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sigmoid_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sigmoid_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sigmoid_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sign_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sign_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sinc_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sinc_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sinh_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sinh_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_bessel_j1_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_entr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_erfcx_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_log_softmax_with_dtype_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_multigammaln_mvlgamma_p_1_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_multigammaln_mvlgamma_p_1_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_ndtr_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_ndtr_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_ndtr_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_softmax_with_dtype_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_special_softmax_with_dtype_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sqrt_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_squeeze_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_squeeze_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_squeeze_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_stack_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_stack_cuda_int32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_std_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_std_mean_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sub_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sub_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sum_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_sum_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_t_copy_cuda_complex128, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_t_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tanh_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_trace_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_tril_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_triu_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_triu_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_triu_indices_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_true_divide_cuda_int16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unbind_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unbind_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unflatten_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unfold_copy_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unfold_copy_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unfold_cuda_uint8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unsqueeze_copy_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unsqueeze_cuda_float64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_unsqueeze_cuda_int8, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_as_complex_cuda_float32, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_as_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_as_cuda_complex32, 
test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_as_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_as_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_copy_cuda_bfloat16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_view_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_where_cuda_float16, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_where_cuda_int64, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_zeros_cuda_bool, test/test_ops.py::TestCommonCUDA::test_python_ref_torch_fallback__refs_zeros_cuda_complex32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_abs_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_argmax_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_argwhere_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_asinh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_atleast_1d_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_bfloat16_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_block_diag_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_cartesian_prod_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_char_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_chunk_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_clamp_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_combinations_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_constant_pad_nd_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_corrcoef_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_cos_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_diag_embed_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_diagonal_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_dist_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_div_floor_rounding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_div_trunc_rounding_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_dot_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_empty_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_empty_strided_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_exp_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_fft_ifftshift_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_fft_irfftn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_fft_rfft_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_fliplr_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_float_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_float_power_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_floor_divide_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_gather_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_gather_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_igammac_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_isclose_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_isfinite_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_isnan_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_item_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_jiterator_2inputs_2outputs_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_jiterator_binary_return_by_ref_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_ldl_factor_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_lu_factor_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_matrix_norm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_norm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_norm_subgradients_at_zero_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_qr_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_tensorsolve_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_linalg_vector_norm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_log10_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_logical_or_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_masked_var_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_mean_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_min_binary_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_movedim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_mul_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_adaptive_max_pool2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_batch_norm_without_cudnn_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_celu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_cosine_embedding_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_dropout2d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_embedding_bag_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_max_pool3d_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_mse_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_nll_loss_cuda_float32, 
test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_pdist_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_relu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_rms_norm_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_silu_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_soft_margin_loss_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_softsign_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_nn_functional_tanhshrink_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_normal_number_mean_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_qr_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_renorm_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_reshape_as_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_reshape_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_resize__cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_resolve_conj_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_scalar_tensor_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_scatter_reduce_amin_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_short_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_sigmoid_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_sign_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_slice_scatter_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_special_erfcx_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_special_scaled_modified_bessel_k0_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_split_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_squeeze_multiple_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_squeeze_multiple_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_t_copy_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_take_along_dim_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_take_along_dim_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_tan_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_tanh_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_tile_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_trace_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_trace_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_trapz_cuda_complex64, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_true_divide_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_unbind_cuda_float32, test/test_ops.py::TestCommonCUDA::test_variant_consistency_eager_zero__cuda_complex64, 
test/test_ops.py::TestCompositeComplianceCUDA::test_backward___getitem___cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward___rmod___cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward__native_batch_norm_legit_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward__upsample_bilinear2d_aa_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_add_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_atanh_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_clamp_min_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_column_stack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_copysign_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_deg2rad_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_diagonal_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_double_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_exp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_fft_hfft_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_flip_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_fliplr_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_hstack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_lgamma_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_linalg_cross_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_linalg_inv_ex_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_linalg_matrix_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_linalg_pinv_singular_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_log1p_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_log2_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_log_softmax_with_dtype_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_lu_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_mH_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_masked_std_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nanmean_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_native_dropout_backward_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_conv_transpose1d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_embedding_bag_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_pairwise_distance_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_softplus_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_nn_functional_triplet_margin_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_positive_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_ravel_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_remainder_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_select_scatter_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_backward_sign_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_slice_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_sum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_sum_to_size_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_to_sparse_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_backward_trace_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input___rsub___cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_argwhere_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_as_strided_partial_views_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_cfloat_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_cummin_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_cumprod_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_dist_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_gradient_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_histc_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_hstack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_index_add_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_index_reduce_prod_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_index_select_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_linalg_cholesky_ex_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_linalg_lstsq_grad_oriented_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_linalg_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_linalg_vector_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_logcumsumexp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_mH_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_mT_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_masked_sum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_matrix_exp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_max_binary_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_min_reduction_with_dim_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_mm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_msort_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_new_empty_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_new_full_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_adaptive_avg_pool1d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_adaptive_avg_pool2d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_avg_pool2d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_conv_transpose2d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_gelu_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_interpolate_nearest-exact_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_linear_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_max_unpool1d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_multi_margin_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_nn_functional_upsample_bilinear_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_norm_inf_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_ones_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_repeat_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_resolve_neg_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_roll_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_rot90_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_rsqrt_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_signal_windows_kaiser_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_bessel_j0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_modified_bessel_i0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_special_shifted_chebyshev_polynomial_w_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_sub_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_to_sparse_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_topk_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_trapezoid_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_trapz_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_trunc_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_cow_input_unbind_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_abs_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_acos_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_ceil_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_cholesky_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_count_nonzero_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_dstack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_empty_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_expm1_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_fft_fft2_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_fft_rfft2_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_flatten_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_flip_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_fmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_full_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_geqrf_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_hypot_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_index_fill_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_lgamma_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_linalg_ldl_factor_ex_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_linalg_matrix_rank_hermitian_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_logdet_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_logical_not_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_logit_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_long_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_masked_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_matrix_exp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_min_reduction_no_dim_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_minimum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_mv_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_native_dropout_backward_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_native_layer_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_channel_shuffle_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_glu_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_hinge_embedding_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_leaky_relu_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_max_unpool3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_relu6_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nn_functional_upsample_bilinear_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_nonzero_static_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_polygamma_polygamma_n_2_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_remainder_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_resize__cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_round_decimals_0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_rsub_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_sigmoid_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_sinh_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_slice_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_sort_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_special_bessel_j0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_special_chebyshev_polynomial_u_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_special_hermite_polynomial_h_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_special_polygamma_special_polygamma_n_0_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_square_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_sub_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_t_copy_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_t_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_tile_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_torch_ops_aten__efficient_attention_forward_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_forward_ad_vstack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator__batch_norm_with_update_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator__segment_reduce_lengths_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator__softmax_backward_data_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_abs_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_amin_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_as_strided_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_atan2_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_cat_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_cdouble_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_clamp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_cosh_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_cumsum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_diag_embed_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_diagonal_scatter_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_dot_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_full_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_histc_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_hstack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_hypot_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_igamma_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_linalg_ldl_solve_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_logaddexp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_logdet_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_long_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_masked_select_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nansum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_narrow_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_native_layer_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_batch_norm_without_cudnn_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_channel_shuffle_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_conv_transpose2d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_gaussian_nll_loss_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_layer_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_max_unpool3d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_multi_head_attention_forward_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_nn_functional_softshrink_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_randint_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_reshape_as_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_rot90_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_round_decimals_0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_scatter_reduce_sum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_bessel_y0_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_hermite_polynomial_he_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_special_laguerre_polynomial_l_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_split_list_args_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_stack_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_tensordot_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_triangular_solve_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_operator_unsqueeze_copy_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_add_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_addmm_decomposed_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_addmv_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_bernoulli_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_broadcast_shapes_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_byte_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_conj_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_conj_physical_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_constant_pad_nd_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_contiguous_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_empty_permuted_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_eq_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_fft_fftshift_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_float_power_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_floor_divide_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_ge_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_grid_sampler_2d_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_int_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_isinf_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_isnan_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_jiterator_binary_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_lerp_cuda_float32, 
test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_det_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_eigvalsh_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_ldl_solve_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_matrix_rank_hermitian_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_linalg_tensorsolve_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_log1p_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_logical_not_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_logical_xor_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_argmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_cumprod_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_log_softmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_logaddexp_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_softmax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_masked_softmin_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_matmul_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_min_binary_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_minimum_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_native_batch_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_new_empty_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_new_ones_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_binary_cross_entropy_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_hinge_embedding_loss_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_layer_norm_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_pad_reflect_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_pad_replicate_negative_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nn_functional_upsample_nearest_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_nonzero_static_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_pinverse_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_rad2deg_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_randn_like_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_round_decimals_neg_3_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_scatter_reduce_amax_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_take_along_dim_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_var_unbiased_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_view_as_complex_cuda_float32, test/test_ops.py::TestCompositeComplianceCUDA::test_view_replay_vstack_cuda_float32, test/test_ops.py::TestMathBitsCUDA::test_conj_view___rmul___cuda_complex64, 
test/test_ops.py::TestMathBitsCUDA::test_conj_view___rpow___cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs__conversions_cfloat_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_abs_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_acosh_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_as_strided_partial_views_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_broadcast_tensors_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_cos_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_isfinite_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_linalg_cross_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_linalg_svd_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_log1p_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_masked_fill_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_nn_functional_pairwise_distance_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_nn_functional_pixel_unshuffle_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_nn_functional_softmax_with_dtype_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_repeat_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_t_copy_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_tan_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_tril_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_view_as_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_view_copy_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__refs_where_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view__unsafe_masked_index_put_accumulate_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_alias_copy_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_cumprod_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_diag_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_full_like_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_gather_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_linalg_eigvals_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_linalg_lu_factor_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_linalg_slogdet_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_logical_not_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_mH_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_masked_select_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_movedim_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_mul_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_nn_functional_pad_circular_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_nn_functional_unfold_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_randn_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_resize_as__cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_scatter_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_squeeze_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_stack_cuda_complex64, 
test/test_ops.py::TestMathBitsCUDA::test_conj_view_std_mean_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_t_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_conj_view_unsqueeze_cuda_complex64, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view___rmatmul___cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs__conversions_float_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_all_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_atleast_2d_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_block_diag_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_clone_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_constant_pad_nd_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_cumprod_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_fft_irfft2_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_flip_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_index_add_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_istft_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_linalg_diagonal_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_linalg_svdvals_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_linalg_vecdot_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_logspace_tensor_overload_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_new_empty_strided_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_repeat_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_special_log_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_special_softmax_with_dtype_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_tan_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_tanh_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view__refs_tril_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_block_diag_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_cartesian_prod_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_chalf_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_cholesky_solve_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_contiguous_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_cos_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_cross_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_empty_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_empty_strided_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_equal_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_expand_copy_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_expand_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_fft_fftshift_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_fliplr_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_linalg_eigvals_cuda_complex128, 
test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_linalg_lstsq_grad_oriented_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_linalg_pinv_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_linalg_solve_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_linalg_vector_norm_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_masked_select_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_matmul_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nanmean_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_ne_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_conv1d_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_conv_transpose3d_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_l1_loss_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_normalize_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_pad_circular_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_pad_replicate_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_triplet_margin_loss_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_nn_functional_triplet_margin_with_distance_loss_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_permute_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_real_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_reciprocal_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_repeat_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_reshape_as_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_slice_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_split_with_sizes_copy_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_svd_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_tensor_split_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_trapz_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_tril_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_true_divide_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_conj_view_view_as_real_cuda_complex128, test/test_ops.py::TestMathBitsCUDA::test_neg_view___rmatmul___cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__chunk_cat_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs__conversions_bfloat16_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_acosh_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_atan2_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_atanh_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_copysign_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_exp2_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_exponential_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_flip_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_fmod_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_linalg_diagonal_cuda_float64, 
test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_log_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_logical_xor_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_lt_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_meshgrid_list_of_tensors_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_new_zeros_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_nn_functional_dropout_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_nn_functional_threshold_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_ones_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_permute_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_reshape_as_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_sgn_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_square_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_sum_to_size_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_t_copy_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_triu_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view__refs_view_copy_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_alias_copy_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_bfloat16_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_cdouble_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_chunk_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_clamp_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_conj_physical_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_constant_pad_nd_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_cross_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_cumprod_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_deg2rad_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_diagonal_copy_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_einsum_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_fft_ifftshift_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_fft_ihfft2_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_fft_ihfftn_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_fft_rfftn_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_floor_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_full_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_half_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_inner_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_isfinite_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_isnan_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_isneginf_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_jiterator_unary_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_lerp_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_det_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_eigvals_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_inv_ex_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_norm_subgradients_at_zero_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_linalg_solve_triangular_cuda_float64, 
test/test_ops.py::TestMathBitsCUDA::test_neg_view_log_softmax_with_dtype_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_logical_and_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_logspace_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_masked_fill_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_max_pool2d_with_indices_backward_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_max_reduction_no_dim_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_mul_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_new_empty_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_new_empty_strided_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_new_full_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_adaptive_avg_pool2d_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_adaptive_max_pool3d_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_batch_norm_without_cudnn_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_channel_shuffle_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_cosine_embedding_loss_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_dropout_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_embedding_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_hinge_embedding_loss_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_margin_ranking_loss_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_max_pool2d_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_prelu_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_relu_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_nn_functional_threshold_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_polygamma_polygamma_n_4_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_rad2deg_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_scalar_tensor_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_scatter_add_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_sigmoid_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_sparse_sampled_addmm_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_special_modified_bessel_i0_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_special_shifted_chebyshev_polynomial_t_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_special_spherical_bessel_j0_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_to_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_transpose_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_triangular_solve_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_triu_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_var_mean_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_vdot_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_view_copy_cuda_float64, test/test_ops.py::TestMathBitsCUDA::test_neg_view_zeros_cuda_float64, test/test_ops.py::TestFakeTensorCUDA::test_fake__upsample_bilinear2d_aa_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_argmin_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_atleast_1d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast___getitem___cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast___rsub___cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_all_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_clone_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_contiguous_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_div_no_rounding_mode_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_fft_hfft2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_fft_hfftn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_fft_ifftn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_fft_irfft2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_fft_irfft_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_fmod_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_hypot_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_index_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_index_put_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_isclose_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_isinf_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_isreal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_istft_cuda_complex64, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_kron_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_linalg_cholesky_ex_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_linalg_diagonal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_linalg_ldl_factor_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_linalg_norm_subgradients_at_zero_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_log1p_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_logical_xor_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_logit_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_logspace_tensor_overload_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_masked_logaddexp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_mm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_avg_pool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_glu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_hardsigmoid_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_hinge_embedding_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_huber_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_instance_norm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_mse_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_multi_head_attention_forward_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_pad_circular_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nn_functional_pad_replicate_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_nonzero_static_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_ormqr_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_polygamma_polygamma_n_0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_polygamma_polygamma_n_2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_rot90_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_scalar_tensor_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_sgn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_signal_windows_general_hamming_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_signbit_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_chebyshev_polynomial_u_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_log_ndtr_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_special_spherical_bessel_j0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_squeeze_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_stft_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_take_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_tile_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_trapz_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_true_divide_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_unflatten_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_unsqueeze_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_autocast_var_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_bincount_cuda_int64, test/test_ops.py::TestFakeTensorCUDA::test_fake_bucketize_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_ceil_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_conj_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp__segment_reduce_lengths_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_as_strided_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_clone_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_diag_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_div_floor_rounding_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_dot_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_dsplit_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_fft_ifft2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_fft_ihfft_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_fft_rfft2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_flipud_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_ldexp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_linalg_inv_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_log10_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_log_softmax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_mT_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_max_reduction_no_dim_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_mode_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_dropout3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_fractional_max_pool2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_fractional_max_pool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_instance_norm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_max_pool2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_max_unpool1d_grad_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_poisson_nll_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_soft_margin_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_nn_functional_tanhshrink_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_put_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_repeat_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_repeat_interleave_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_reshape_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_select_scatter_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_sgn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_sinc_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_special_entr_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_special_i1e_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_special_ndtr_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_sum_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_t_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_t_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_tanh_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_to_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_torch_ops_aten__efficient_attention_forward_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_var_unbiased_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_amp_xlogy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp___getitem___cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp__native_batch_norm_legit_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_addmm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_addr_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_cholesky_inverse_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_cosh_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_dist_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_dsplit_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_einsum_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_fft_irfft2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_fft_irfft_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_float_power_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_inner_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_diagonal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_eig_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_inv_ex_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_lu_factor_ex_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_linalg_multi_dot_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_logaddexp2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_masked_norm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_masked_sum_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_median_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_meshgrid_variadic_tensors_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_mul_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_narrow_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_neg_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_channel_shuffle_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_conv3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_fractional_max_pool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_hardshrink_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_hardswish_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_interpolate_bilinear_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_nn_functional_leaky_relu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_pca_lowrank_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_reciprocal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_reshape_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_rot90_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_special_i0e_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_special_i1e_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_std_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_sum_to_size_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_triangular_solve_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_crossref_backward_no_amp_var_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_cummax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_div_floor_rounding_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_expand_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_fft_hfftn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_full_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_index_reduce_amin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_index_select_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_isnan_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_isreal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_linalg_diagonal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_linalg_eigvals_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_linalg_matrix_rank_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_linalg_slogdet_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_mH_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_masked_logaddexp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_masked_logsumexp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_masked_mean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_masked_softmax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_masked_std_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_matrix_exp_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_mean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_minimum_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nanmean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_new_ones_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_celu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_gelu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_max_unpool1d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_max_unpool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_mse_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_pixel_shuffle_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_nn_functional_softplus_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_pow_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_rad2deg_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_resize__cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_round_decimals_3_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_scatter_reduce_amin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_select_scatter_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_signal_windows_hamming_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_fake_slice_scatter_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_special_bessel_j1_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_special_chebyshev_polynomial_t_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_special_modified_bessel_i0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_special_scaled_modified_bessel_k1_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_square_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_var_mean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_var_mean_unbiased_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_fake_view_copy_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops___getitem___cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops__native_batch_norm_legit_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_atan2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_bernoulli_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_bitwise_not_cuda_int64, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_cholesky_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_corrcoef_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_dstack_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_empty_permuted_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_erf_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_exp2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_fft_hfftn_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_fliplr_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_fmin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_frac_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_hstack_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_igamma_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_inner_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_isin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_isreal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_kron_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_kthvalue_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_diagonal_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_eigvalsh_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_inv_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_lstsq_grad_oriented_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_lu_factor_ex_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_solve_ex_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_linalg_solve_triangular_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_log10_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_logaddexp2_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_masked_softmax_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_masked_softmin_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_minimum_cuda_float32, 
test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_adaptive_max_pool3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_avg_pool1d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_batch_norm_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_channel_shuffle_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_dropout3d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_embedding_bag_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_hardswish_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_hardtanh_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_max_unpool2d_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_pad_replicate_negative_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_prelu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_relu6_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_relu_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_smooth_l1_loss_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_nn_functional_upsample_nearest_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_normal_number_mean_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_quantile_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_special_bessel_j0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_special_entr_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_special_spherical_bessel_j0_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_split_list_args_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_square_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_std_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_stft_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_tensordot_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_to_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_torch_ops_aten__flash_attention_forward_cuda_float16, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_transpose_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_true_divide_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_where_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_zero__cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_pointwise_ops_zeros_like_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_arange_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_linspace_cuda_uint8, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_logspace_tensor_overload_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout__refs_ones_cuda_complex128, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_arange_cuda_float64, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_linspace_cuda_float64, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_linspace_tensor_overload_cuda_complex64, 
test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_linspace_tensor_overload_cuda_int16, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_ones_cuda_bool, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_ones_cuda_uint8, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_zeros_cuda_complex32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_zeros_cuda_float16, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_zeros_cuda_float32, test/test_ops.py::TestFakeTensorCUDA::test_strided_layout_zeros_cuda_int8, test/test_ops.py::TestTagsCUDA::test_tags__batch_norm_with_update_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs__conversions_short_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_addcmul_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_any_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_atleast_1d_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_atleast_3d_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_cauchy_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_eq_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_fft_irfftn_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_flatten_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_flip_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_ge_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_isclose_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_isposinf_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_lerp_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_linspace_tensor_overload_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_neg_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_new_empty_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nextafter_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_channel_shuffle_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_gelu_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_nn_functional_hardtanh_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_rad2deg_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_repeat_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_special_zeta_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_unfold_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__refs_vsplit_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags__unsafe_masked_index_put_accumulate_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_acosh_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_addr_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_aminmax_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_argsort_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_bernoulli_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_bitwise_xor_cuda_int64, test/test_ops.py::TestTagsCUDA::test_tags_broadcast_tensors_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_cov_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_cummin_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_diag_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_dsplit_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_equal_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_erfinv_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_fft_irfft_cuda_float32, 
test/test_ops.py::TestTagsCUDA::test_tags_fft_rfft2_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_float_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_floor_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_full_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_gt_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_hsplit_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_imag_cuda_complex64, test/test_ops.py::TestTagsCUDA::test_tags_index_fill_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_isnan_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_kron_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_det_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_det_singular_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_householder_product_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_ldl_factor_ex_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_matrix_power_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_solve_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_svd_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_linalg_tensorinv_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_log10_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_logspace_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_logsumexp_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_masked_select_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_masked_std_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_new_empty_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_adaptive_max_pool1d_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_channel_shuffle_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_dropout2d_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_dropout3d_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_fractional_max_pool2d_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_fractional_max_pool3d_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_group_norm_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_kl_div_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_logsigmoid_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_pad_reflect_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_pdist_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_silu_complex_cuda_complex64, test/test_ops.py::TestTagsCUDA::test_tags_nn_functional_threshold_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_rand_like_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_real_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_repeat_interleave_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_resolve_neg_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_scatter_reduce_amax_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_softmax_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_special_airy_ai_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_special_bessel_j1_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_special_modified_bessel_i0_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_special_scaled_modified_bessel_k0_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_special_spherical_bessel_j0_cuda_float32, 
test/test_ops.py::TestTagsCUDA::test_tags_sum_to_size_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_t_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_torch_ops_aten__flash_attention_forward_cuda_float16, test/test_ops.py::TestTagsCUDA::test_tags_trace_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_triu_indices_cuda_int64, test/test_ops.py::TestTagsCUDA::test_tags_true_divide_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_unfold_copy_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_vdot_cuda_float32, test/test_ops.py::TestTagsCUDA::test_tags_vsplit_cuda_float32
2024-08-07T18:26:49.5217133Z 
2024-08-07T18:26:53.2110580Z Running test_decomp 6/19 ... [2024-08-07 18:26:53.210507]
2024-08-07T18:26:53.2114531Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_decomp.py', '-m', 'not serial', '--shard-id=6', '--num-shards=19', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:26:53.211050]
2024-08-07T18:33:55.4769365Z 
2024-08-07T18:33:55.4770640Z test_decomp 6/19 was successful, full logs can be found in artifacts with path test/test-reports/test_decomp_6.19_8cbf9f879dfc1640_.log
2024-08-07T18:33:55.4953453Z Running 485 items in this shard: test/test_decomp.py::TestDecompCUDA::test_batch_norm_unflatten_weight_bias_cuda, test/test_decomp.py::TestDecompCUDA::test_comprehensive___getitem___cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive___getitem___cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive___radd___cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rmod___cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rmul___cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rpow___cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rpow___cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rsub___cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive__chunk_cat_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive__unsafe_masked_index_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_acosh_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addbmm_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addbmm_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addcdiv_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addr_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_all_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_all_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_angle_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_angle_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_any_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_arange_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argmin_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argsort_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argsort_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argwhere_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_copy_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_scatter_cuda_float64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_asin_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_asinh_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atan_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atan_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_2d_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_3d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bincount_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bitwise_not_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bitwise_or_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bitwise_right_shift_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_block_diag_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bmm_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bool_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bucketize_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cartesian_prod_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cartesian_prod_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cdouble_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cfloat_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cholesky_solve_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_min_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_column_stack_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_column_stack_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_combinations_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_contiguous_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cos_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cov_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cross_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cummin_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cummin_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cumulative_trapezoid_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cumulative_trapezoid_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diag_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diag_embed_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_scatter_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_digamma_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_div_floor_rounding_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_div_floor_rounding_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_div_no_rounding_mode_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_div_no_rounding_mode_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_div_trunc_rounding_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_double_cuda_int8, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_dsplit_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_permuted_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_permuted_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_permuted_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_exp2_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_exp_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_expand_copy_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_expm1_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_exponential_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fft2_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fft_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fftn_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fftshift_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fftshift_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfftn_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfftn_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifftn_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ihfft_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_rfft2_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fill_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_flipud_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_float_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmin_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmod_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_frac_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_full_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gather_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ge_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_geometric_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gradient_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_half_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_histc_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_i0_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_add_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_copy_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_fill_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_put_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_inner_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_int_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_int_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_int_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isinf_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isinf_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isnan_cuda_complex128, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_isneginf_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isposinf_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_2inputs_2outputs_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_4inputs_with_extra_args_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_4inputs_with_extra_args_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_binary_return_by_ref_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_unary_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_unary_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_kron_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_kthvalue_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_kthvalue_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ldexp_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_cross_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_det_singular_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_diagonal_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_lstsq_grad_oriented_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_matrix_rank_hermitian_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_norm_subgradients_at_zero_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_pinv_singular_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_qr_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_svd_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_vander_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linspace_tensor_overload_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log10_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log2_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log2_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log_softmax_with_dtype_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logcumsumexp_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logdet_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_and_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_and_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_not_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_long_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_lt_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mT_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_argmax_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_cumsum_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_fill_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_fill_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_logsumexp_cuda_float64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_mean_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_mean_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_prod_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_scatter_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_select_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_sum_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mean_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_meshgrid_list_of_tensors_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_meshgrid_variadic_tensors_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_min_reduction_no_dim_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_min_reduction_no_dim_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_min_reduction_with_dim_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_minimum_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mode_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_movedim_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_movedim_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_msort_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_msort_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mul_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mvlgamma_mvlgamma_p_3_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mvlgamma_mvlgamma_p_5_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nan_to_num_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nanmean_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_native_layer_norm_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_full_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_full_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_ones_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_batch_norm_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_bilinear_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_binary_cross_entropy_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_celu_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv2d_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_cross_entropy_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_feature_alpha_dropout_with_train_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_feature_alpha_dropout_without_train_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_fractional_max_pool2d_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_group_norm_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_hardsigmoid_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_hardtanh_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_instance_norm_cuda_float64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_interpolate_bicubic_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_interpolate_nearest-exact_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_interpolate_nearest_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_l1_loss_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_margin_ranking_loss_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_max_unpool3d_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_max_unpool3d_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_multilabel_margin_loss_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_circular_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_constant_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pairwise_distance_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_poisson_nll_loss_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_poisson_nll_loss_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_prelu_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_relu6_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_relu_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_relu_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_rrelu_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_silu_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_softmin_with_dtype_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_softplus_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_tanhshrink_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_tanhshrink_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nonzero_static_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_normal_number_mean_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ones_like_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_outer_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_permute_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_permute_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_pinverse_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_3_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_4_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_positive_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_positive_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_qr_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reciprocal_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reciprocal_cuda_int64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_reciprocal_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_renorm_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reshape_as_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reshape_as_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reshape_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reshape_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resize__cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resolve_conj_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rot90_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rsub_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_add_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_select_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_select_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_select_scatter_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_select_scatter_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sgn_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_signal_windows_blackman_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_signal_windows_cosine_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_softmax_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_softmax_with_dtype_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sparse_sampled_addmm_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_u_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_w_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_erfcx_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_i1_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_laguerre_polynomial_l_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_log_ndtr_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_modified_bessel_k0_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_polygamma_special_polygamma_n_0_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_scaled_modified_bessel_k0_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_scaled_modified_bessel_k0_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_shifted_chebyshev_polynomial_u_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_zeta_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_split_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_split_list_args_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_split_list_args_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_split_list_args_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_squeeze_multiple_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_squeeze_multiple_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_std_cuda_float32, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_sum_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sum_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sum_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_t_copy_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_t_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_take_along_dim_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_take_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tanh_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tanh_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tensor_split_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tensor_split_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_to_sparse_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_trace_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_transpose_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_transpose_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tril_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tril_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_triu_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_triu_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_true_divide_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_true_divide_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_trunc_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unbind_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unflatten_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unflatten_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unique_consecutive_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unique_consecutive_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unique_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unique_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_var_mean_unbiased_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_vdot_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_as_real_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_copy_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_copy_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_vsplit_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_vstack_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_vstack_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_xlogy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zeros_like_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zeros_like_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick__chunk_cat_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick__unsafe_masked_index_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_abs_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_acos_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_acos_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_acosh_cuda_complex64, 
test/test_decomp.py::TestDecompCUDA::test_quick_add_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_all_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_amax_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_amin_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_aminmax_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_aminmax_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_arange_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_as_strided_copy_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_as_strided_copy_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_as_strided_scatter_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_asin_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_asin_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_asin_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_atan2_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_atan_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_atan_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_and_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_or_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_block_diag_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_cat_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_cat_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_clamp_max_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_clone_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_clone_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_clone_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_conj_physical_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward__unsafe_masked_index_put_accumulate_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_nan_to_num_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_nn_functional_glu_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_trace_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_transpose_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_xlogy_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_cosh_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_cumsum_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_deg2rad_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_diagonal_scatter_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_div_floor_rounding_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_div_floor_rounding_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_div_trunc_rounding_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_empty_like_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_empty_strided_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_eq_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_eq_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_erf_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_expand_copy_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_eye_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_fft_fft2_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_fft_hfft2_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_fft_hfft2_cuda_int64, 
test/test_decomp.py::TestDecompCUDA::test_quick_fft_ihfftn_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_fft_rfftn_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_flip_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_floor_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_floor_divide_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_ge_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_hypot_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_i0_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_igamma_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_index_add_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_index_copy_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_index_copy_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_index_copy_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_index_fill_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_index_select_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_index_select_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_isnan_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_isposinf_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_lgamma_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_linalg_diagonal_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_linspace_tensor_overload_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_log10_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_log1p_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_log2_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_logaddexp2_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_logical_and_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_logical_and_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_logical_not_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_logical_or_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_logit_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_logspace_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_logspace_tensor_overload_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_lt_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_masked_fill_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_masked_fill_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_meshgrid_list_of_tensors_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_mul_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_mul_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_mvlgamma_mvlgamma_p_5_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_nan_to_num_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_native_dropout_backward_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_ne_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_neg_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_new_empty_strided_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_new_full_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_new_full_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_new_zeros_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_new_zeros_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_elu_cuda_bfloat16, 
test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_hardtanh_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_huber_loss_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_huber_loss_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_rrelu_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_ones_like_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_permute_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_prod_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_randn_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_roll_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_roll_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_roll_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_round_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_round_decimals_3_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_round_decimals_3_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_rsqrt_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_rsub_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_rsub_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_select_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_select_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_sign_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_signbit_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_sin_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_sin_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_sinc_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_sinh_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_sinh_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_sinh_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_slice_scatter_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_special_erfcx_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_special_i1e_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_special_log_ndtr_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_special_ndtri_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_special_xlog1py_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_special_xlog1py_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_special_xlog1py_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_split_list_args_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_squeeze_multiple_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_stack_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_std_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_sub_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_sub_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_sum_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_t_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_tanh_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_trace_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_transpose_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_transpose_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_tril_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_triu_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_unfold_cuda_int32, 
test/test_decomp.py::TestDecompCUDA::test_quick_unsafe_split_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_unsqueeze_copy_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_unsqueeze_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_var_mean_unbiased_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_view_copy_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_view_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_view_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_where_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_xlogy_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_zero__cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_rnn_decomp_module_nn_GRU_train_mode_cuda_float32
2024-08-07T18:33:55.5128642Z
2024-08-07T18:33:59.2105688Z Running test_decomp 11/19 ... [2024-08-07 18:33:59.210034]
2024-08-07T18:33:59.2109991Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_decomp.py', '-m', 'not serial', '--shard-id=11', '--num-shards=19', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:33:59.210580]
2024-08-07T18:37:17.9560725Z
2024-08-07T18:37:17.9561555Z test_decomp 1/19 was successful, full logs can be found in artifacts with path test/test-reports/test_decomp_1.19_e0ec0d2b7659c95d_.log
2024-08-07T18:37:17.9717380Z Running 417 items in this shard: test/test_decomp.py::TestDecompCUDA::test_comprehensive_H_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_T_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___getitem___cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive___radd___cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rmatmul___cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rmatmul___cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rmul___cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive___ror___cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive__native_batch_norm_legit_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive__unsafe_masked_index_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_abs_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_abs_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addcmul_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addmv_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addr_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_all_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_all_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_aminmax_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argmax_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argmin_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_copy_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_copy_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_scatter_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atan2_cuda_float64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_atanh_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_3d_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_3d_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_3d_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bincount_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bitwise_right_shift_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bitwise_xor_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_broadcast_tensors_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_broadcast_tensors_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bucketize_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cartesian_prod_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_char_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cholesky_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_chunk_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_max_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_min_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_complex_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_conj_physical_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_copysign_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_corrcoef_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cos_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cosh_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_count_nonzero_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_count_nonzero_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cross_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cummax_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cumulative_trapezoid_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_deg2rad_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_deg2rad_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagflat_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_copy_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diff_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_div_no_rounding_mode_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_dot_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_dsplit_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_strided_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_eq_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_equal_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_erf_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_erf_cuda_int16, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_erfc_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_erfinv_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_exp_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_expand_as_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_expand_copy_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_expand_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_eye_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_eye_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_eye_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fft2_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfft2_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfftn_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifft2_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifft2_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifftshift_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifftshift_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ihfft2_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfftn_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_rfft_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_rfftn_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fill_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fill_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_flipud_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_float_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_floor_divide_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_floor_divide_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_floor_divide_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmax_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmin_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_full_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_full_like_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_full_like_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gather_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gcd_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_geometric_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_geqrf_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_grid_sampler_2d_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gt_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_histc_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_copy_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_int_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isclose_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isfinite_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isin_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isinf_cuda_int32, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_item_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_item_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_kron_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_le_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_lgamma_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_cross_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_diagonal_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_eigvals_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_ldl_factor_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_ldl_solve_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_lu_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_lu_factor_ex_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_matrix_rank_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_matrix_rank_hermitian_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_vector_norm_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linspace_tensor_overload_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log_softmax_with_dtype_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_and_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logit_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logit_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logspace_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logspace_tensor_overload_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logspace_tensor_overload_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logsumexp_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_lu_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mH_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_amax_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_amax_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_amin_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_cumsum_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_var_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_var_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_max_reduction_no_dim_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_meshgrid_list_of_tensors_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_min_reduction_no_dim_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_minimum_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_movedim_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mul_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nan_to_num_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nansum_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_narrow_copy_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_narrow_cuda_complex128, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_narrow_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_neg_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_full_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_adaptive_avg_pool2d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_adaptive_max_pool1d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_batch_norm_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_channel_shuffle_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_channel_shuffle_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv1d_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv2d_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv_transpose3d_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_dropout2d_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_gelu_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_grid_sample_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_group_norm_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_hardswish_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_hardtanh_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_interpolate_linear_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_interpolate_nearest-exact_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_leaky_relu_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_local_response_norm_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_margin_ranking_loss_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_multi_margin_loss_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_multi_margin_loss_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_normalize_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_reflect_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_reflect_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pixel_unshuffle_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_rms_norm_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_silu_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_softmin_with_dtype_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_softplus_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_triplet_margin_loss_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_triplet_margin_loss_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nonzero_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nonzero_static_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_norm_cuda_complex128, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_norm_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_normal_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ormqr_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_3_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_3_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_pow_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_quantile_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rad2deg_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rad2deg_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_randint_like_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_randn_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ravel_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_remainder_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_remainder_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reshape_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resolve_conj_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resolve_neg_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resolve_neg_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_roll_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_round_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rsqrt_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rsqrt_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rsqrt_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scalar_tensor_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_amin_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_amin_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_sum_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_sum_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_searchsorted_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_select_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_select_scatter_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_short_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sigmoid_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sigmoid_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_signal_windows_gaussian_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_signal_windows_hann_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_signbit_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_signbit_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sin_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sin_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sinh_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_slice_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_slice_scatter_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_slice_scatter_cuda_float16, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_slice_scatter_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_slice_scatter_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_bessel_j0_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_bessel_j0_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_bessel_y0_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_bessel_y0_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_bessel_y0_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_t_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_t_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_v_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_erfcx_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_erfcx_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_hermite_polynomial_h_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_i0e_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_i0e_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_modified_bessel_i0_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_modified_bessel_i0_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_shifted_chebyshev_polynomial_t_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_split_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_squeeze_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sub_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sub_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_svd_lowrank_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_t_copy_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_t_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_take_along_dim_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tan_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tile_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_topk_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_triu_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_trunc_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unfold_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unravel_index_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsafe_chunk_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsafe_chunk_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsafe_split_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsafe_split_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsqueeze_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsqueeze_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_var_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_var_unbiased_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_as_real_cuda_complex64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_vsplit_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zero__cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_masked_fill_cuda, test/test_decomp.py::TestDecompCUDA::test_quick__unsafe_masked_index_put_accumulate_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_acosh_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_add_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_addcdiv_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_addcdiv_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_alias_copy_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_all_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_amin_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_atan2_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_not_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_right_shift_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_xor_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_clamp_min_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_copysign_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_rad2deg_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_renorm_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_split_list_args_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_std_mean_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_count_nonzero_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_count_nonzero_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_cumprod_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_deg2rad_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_div_floor_rounding_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_div_no_rounding_mode_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_empty_strided_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_empty_strided_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_erf_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_erfc_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_erfinv_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_exp_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_expand_copy_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_expand_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_expm1_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_eye_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_eye_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_fft_hfft2_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_fft_hfftn_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ifft_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ihfft2_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ihfftn_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ihfftn_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_fft_irfft_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_fft_irfftn_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_fft_rfft2_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_fft_rfft2_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_fft_rfft_cuda_int16, 
test/test_decomp.py::TestDecompCUDA::test_quick_fft_rfft_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_fill_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_flip_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_floor_divide_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_floor_divide_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_fmin_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_fmin_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_full_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_gcd_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_gt_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_index_add_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_index_fill_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_isin_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_isinf_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_isinf_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_isneginf_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_lcm_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_linalg_cross_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_linspace_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_log_normal_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_logaddexp_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_logical_or_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_logspace_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_logspace_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_logspace_tensor_overload_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_masked_fill_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_meshgrid_list_of_tensors_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_native_layer_norm_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_ne_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_neg_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_new_empty_strided_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_new_zeros_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_hardsigmoid_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_hardswish_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_pad_constant_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_silu_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_norm_fro_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_ones_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_permute_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_pow_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_pow_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_pow_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_repeat_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_repeat_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_repeat_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_round_decimals_neg_3_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_rsub_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_sigmoid_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_sigmoid_cuda_complex32, 
test/test_decomp.py::TestDecompCUDA::test_quick_sigmoid_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_sign_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_sin_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_slice_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_softmax_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_special_i1e_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_special_ndtri_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_split_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_split_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_split_with_sizes_copy_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_split_with_sizes_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_sqrt_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_squeeze_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_sub_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_sum_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_t_copy_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_t_copy_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_t_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_take_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_take_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_tan_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_transpose_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_transpose_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_tril_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_triu_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_trunc_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_trunc_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_unfold_copy_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_uniform_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_unsafe_split_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_unsqueeze_copy_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_view_copy_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_view_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_zero__cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_rnn_decomp_module_nn_LSTM_eval_mode_cuda_float64, test/test_decomp.py::DecompOneOffTestsCUDA::test_contiguous_softmax_cuda 2024-08-07T18:37:17.9866421Z 2024-08-07T18:37:21.6825201Z Running test_decomp 16/19 ... [2024-08-07 18:37:21.682011] 2024-08-07T18:37:21.6828926Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_decomp.py', '-m', 'not serial', '--shard-id=16', '--num-shards=19', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 18:37:21.682512] 2024-08-07T18:43:07.4272839Z 2024-08-07T18:43:07.4277240Z test_decomp 11/19 was successful, full logs can be found in artifacts with path test/test-reports/test_decomp_11.19_d3ddd556460f341c_.log 2024-08-07T18:43:07.4464947Z Running 495 items in this shard: test/test_decomp.py::TestDecompCUDA::test_comprehensive_H_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___radd___cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rand___cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rdiv___cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rdiv___cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rmatmul___cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rmod___cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rpow___cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rxor___cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive__unsafe_masked_index_put_accumulate_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_add_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addmm_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addmm_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addmv_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addr_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_alias_copy_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_all_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_amax_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argmax_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argwhere_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argwhere_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_partial_views_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_asin_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_asinh_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_asinh_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_asinh_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atan2_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atan2_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atanh_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_1d_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_1d_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_2d_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bernoulli_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bernoulli_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bincount_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bitwise_not_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bool_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_broadcast_tensors_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_broadcast_to_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bucketize_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bucketize_cuda_int8, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_bucketize_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_byte_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cartesian_prod_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cauchy_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cdouble_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cdouble_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ceil_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_chalf_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_chalf_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cholesky_solve_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_chunk_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_max_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_min_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_column_stack_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_complex_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_conj_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_conj_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_constant_pad_nd_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_contiguous_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_contiguous_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_corrcoef_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cos_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cosh_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_count_nonzero_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cummin_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cummin_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cumulative_trapezoid_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_deg2rad_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diff_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diff_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_dsplit_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_dsplit_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_permuted_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_eq_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_equal_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_erf_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_erfc_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_expand_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_eye_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fftn_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fftn_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_fftshift_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfft2_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfft_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfft_cuda_int16, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfft_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_hfftn_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifft2_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifft_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifftn_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifftn_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ihfft2_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ihfftn_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ihfftn_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfft_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfftn_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfftn_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfftn_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_rfft_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fill_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fill_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fill_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fliplr_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_float_power_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmax_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmod_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gather_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ge_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gt_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_hsplit_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_hstack_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_hypot_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_hypot_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_i0_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_igamma_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_imag_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_fill_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_reduce_amax_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_select_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_inner_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isclose_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isfinite_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isinf_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isnan_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isneginf_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_2inputs_2outputs_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_2inputs_2outputs_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_binary_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_binary_return_by_ref_cuda_complex128, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_binary_return_by_ref_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_binary_return_by_ref_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_unary_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_kthvalue_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_lerp_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_cholesky_ex_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_cross_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_cross_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_det_singular_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_householder_product_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_householder_product_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_lstsq_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_matrix_rank_hermitian_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_slogdet_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_svdvals_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_vecdot_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linspace_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linspace_tensor_overload_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log_normal_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logcumsumexp_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logdet_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_and_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_not_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_not_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_not_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logspace_tensor_overload_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mH_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mH_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mT_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mT_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_argmax_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_fill_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_log_softmax_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_logsumexp_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_scatter_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_scatter_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_select_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_std_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_std_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_var_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_matrix_exp_cuda_complex128, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_max_pool2d_with_indices_backward_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_max_reduction_no_dim_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_max_reduction_with_dim_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_max_reduction_with_dim_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_maximum_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_meshgrid_variadic_tensors_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_min_binary_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_min_reduction_with_dim_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_minimum_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mode_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_movedim_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_msort_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mul_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nansum_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nansum_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_narrow_copy_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_narrow_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_narrow_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ne_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_neg_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_empty_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_empty_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_empty_strided_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_empty_strided_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_full_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_ones_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_zeros_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_adaptive_max_pool1d_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_avg_pool3d_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_avg_pool3d_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_avg_pool3d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_batch_norm_without_cudnn_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_binary_cross_entropy_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_channel_shuffle_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv_transpose1d_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv_transpose1d_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv_transpose3d_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_cosine_embedding_loss_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_embedding_bag_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_feature_alpha_dropout_without_train_cuda_bool, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_feature_alpha_dropout_without_train_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_feature_alpha_dropout_without_train_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_hinge_embedding_loss_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_instance_norm_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_leaky_relu_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_margin_ranking_loss_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_one_hot_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_constant_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_reflect_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_replicate_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_replicate_negative_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_replicate_negative_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_replicate_negative_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pairwise_distance_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pdist_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pixel_shuffle_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_scaled_dot_product_attention_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_soft_margin_loss_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_softmin_with_dtype_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_softsign_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_softsign_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_tanhshrink_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_threshold_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_unfold_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_upsample_nearest_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_upsample_nearest_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nonzero_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nonzero_static_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nonzero_static_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_pca_lowrank_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_permute_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_permute_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_pinverse_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_2_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_2_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_4_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_pow_cuda_complex32, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_prod_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rand_like_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_randint_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_randn_like_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_randn_like_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_real_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_real_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_remainder_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reshape_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_reshape_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resize__cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resize_as__cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rsqrt_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_mean_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_prod_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_select_scatter_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sgn_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sgn_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_short_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_short_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sigmoid_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sigmoid_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_signal_windows_general_hamming_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sin_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sinc_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_slice_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_slice_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sort_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sort_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_airy_ai_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_bessel_y0_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_bessel_y1_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_t_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_v_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_w_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_w_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_erfcx_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_hermite_polynomial_h_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_i1e_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_laguerre_polynomial_l_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_modified_bessel_k0_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_modified_bessel_k1_cuda_int64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_ndtri_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_shifted_chebyshev_polynomial_u_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_spherical_bessel_j0_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_spherical_bessel_j0_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_split_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_split_with_sizes_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_std_mean_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_std_mean_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sum_to_size_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_svd_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_t_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_t_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_take_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tanh_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tanh_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_transpose_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_trapz_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_triu_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_triu_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_true_divide_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unfold_copy_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unfold_copy_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unfold_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unfold_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unravel_index_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsafe_split_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsqueeze_copy_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsqueeze_copy_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsqueeze_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsqueeze_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_var_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_var_mean_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_as_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_as_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_vstack_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zeros_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zeros_like_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zeros_like_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick__chunk_cat_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick__chunk_cat_cuda_int8, 
test/test_decomp.py::TestDecompCUDA::test_quick__native_batch_norm_legit_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick__unsafe_masked_index_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick__unsafe_masked_index_put_accumulate_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_addcdiv_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_addcmul_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_addcmul_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_addcmul_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_addmv_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_alias_copy_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_all_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_any_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_as_strided_copy_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_asinh_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_asinh_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_atan_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_baddbmm_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_left_shift_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_or_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_right_shift_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_cat_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_cat_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_ceil_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_ceil_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_clamp_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_clamp_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_clamp_max_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_clamp_max_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_conj_physical_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_conj_physical_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_constant_pad_nd_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_constant_pad_nd_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_nn_functional_hardswish_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_special_entr_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_t_copy_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_vdot_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_zero__cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_cos_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_cumprod_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_cumsum_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_cumsum_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_diag_embed_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_diag_embed_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_diagonal_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_diagonal_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_digamma_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_div_no_rounding_mode_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_div_no_rounding_mode_cuda_float64, 
test/test_decomp.py::TestDecompCUDA::test_quick_empty_like_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_eq_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_exp_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_exp_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_expand_copy_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_expand_copy_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_fft_fftn_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_fft_fftn_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_fft_hfftn_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ifft2_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ifft2_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ifft_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ihfft_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_fft_irfft2_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_fft_irfft_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_fft_rfft2_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_fmin_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_fmod_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_frac_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_ge_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_grid_sampler_2d_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_index_add_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_index_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_isinf_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_isnan_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_isnan_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_isneginf_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_item_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_le_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_linalg_cross_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_linalg_diagonal_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_linspace_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_log2_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_log_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_logical_not_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_logical_xor_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_logspace_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_logspace_tensor_overload_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_logsumexp_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_logsumexp_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_masked_fill_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_masked_fill_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_maximum_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_meshgrid_variadic_tensors_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_minimum_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_minimum_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_mul_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_mul_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_mul_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_mvlgamma_mvlgamma_p_3_cuda_int64, 
test/test_decomp.py::TestDecompCUDA::test_quick_nansum_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_native_batch_norm_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_native_layer_norm_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_new_full_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_new_ones_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_new_ones_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_new_ones_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_rrelu_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_unfold_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_norm_fro_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_norm_nuc_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_normal_in_place_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_normal_number_mean_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_ones_like_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_ones_like_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_pow_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_pow_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_rad2deg_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_rad2deg_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_roll_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_roll_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_rot90_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_round_decimals_neg_3_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_select_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_select_scatter_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_sgn_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_sign_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_slice_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_special_i0e_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_special_i1_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_special_i1_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_special_i1e_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_special_i1e_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_special_log_ndtr_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_special_ndtri_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_split_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_sqrt_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_sqrt_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_squeeze_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_squeeze_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_std_mean_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_sub_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_t_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_t_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_take_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_tanh_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_tril_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_tril_indices_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_triu_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_unbind_cuda_complex128, 
test/test_decomp.py::TestDecompCUDA::test_quick_unbind_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_unfold_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_unsafe_split_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_unsqueeze_copy_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_var_unbiased_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_xlogy_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_xlogy_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_xlogy_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_zero__cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_zero__cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_float64, test/test_decomp.py::DecompOneOffTestsCUDA::test_sdpa_nn_functional_scaled_dot_product_attention_cuda_float16, test/test_decomp.py::DecompOneOffTestsCUDA::test_sdpa_nn_functional_scaled_dot_product_attention_cuda_float64 2024-08-07T18:43:07.4646530Z 2024-08-07T18:43:11.2301531Z Running test_modules 2/2 ... [2024-08-07 18:43:11.229643] 2024-08-07T18:43:11.2306696Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_modules.py', '-m', 'not serial', '--shard-id=2', '--num-shards=2', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:43:11.230225] 2024-08-07T18:45:11.2117524Z 2024-08-07T18:45:11.2118612Z test_decomp 16/19 was successful, full logs can be found in artifacts with path test/test-reports/test_decomp_16.19_a509a51586ebc7b6_.log 2024-08-07T18:45:11.2290064Z Running 460 items in this shard: test/test_decomp.py::TestDecompCUDA::test_comprehensive_H_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_H_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_T_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rand___cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rmul___cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive___rpow___cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive__softmax_backward_data_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive__unsafe_masked_index_put_accumulate_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive__unsafe_masked_index_put_accumulate_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_acosh_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_acosh_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_add_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addr_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_addr_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_alias_copy_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_alias_copy_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_alias_copy_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_all_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_amax_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_aminmax_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_angle_cuda_complex128, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_any_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_any_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_argmax_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_copy_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_partial_views_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_scatter_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_scatter_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_as_strided_scatter_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_asinh_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atan2_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atanh_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_atleast_3d_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_baddbmm_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bfloat16_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bfloat16_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bitwise_and_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bitwise_not_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_bool_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_broadcast_shapes_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_broadcast_tensors_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_broadcast_to_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cat_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cat_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cdouble_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cfloat_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_chunk_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_max_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_max_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clamp_min_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_clone_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_column_stack_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_column_stack_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_combinations_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_conj_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_conj_physical_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_corrcoef_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cov_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cummin_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cummin_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_cumprod_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_copy_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diagonal_scatter_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_diff_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_dist_cuda_bfloat16, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_double_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_dstack_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_einsum_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_einsum_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_like_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_empty_like_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_erfc_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_erfc_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_exp2_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_expm1_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifft2_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifftshift_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_ifftshift_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfft2_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfft2_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfft_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_irfftn_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_rfftn_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fft_rfftn_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fill_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fliplr_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_float_power_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_floor_divide_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmax_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmin_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmod_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_fmod_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_frac_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_frexp_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gcd_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gradient_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gt_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_gt_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_heaviside_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_hstack_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_add_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_add_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_put_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_put_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_put_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_reduce_amax_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_reduce_amax_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_reduce_amin_cuda_float16, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_select_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_index_select_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_inner_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isclose_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_isposinf_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_2inputs_2outputs_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_4inputs_with_extra_args_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_jiterator_binary_return_by_ref_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_kthvalue_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_lgamma_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_diagonal_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_eigh_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_lu_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_matrix_power_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_matrix_power_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_matrix_rank_hermitian_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_norm_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_norm_subgradients_at_zero_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_pinv_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_pinv_hermitian_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_pinv_hermitian_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_solve_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linalg_vander_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_linspace_tensor_overload_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_log_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logdet_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logdet_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_and_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_xor_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logical_xor_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_logsumexp_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_lu_unpack_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mT_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_amax_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_argmin_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_cumprod_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_fill_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_logaddexp_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_prod_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_select_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_masked_softmin_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_matmul_cuda_complex64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_max_reduction_no_dim_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mean_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mean_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_meshgrid_list_of_tensors_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_meshgrid_variadic_tensors_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_meshgrid_variadic_tensors_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_min_reduction_no_dim_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_minimum_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_minimum_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_msort_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mul_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mvlgamma_mvlgamma_p_1_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_mvlgamma_mvlgamma_p_3_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nanquantile_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_narrow_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_native_batch_norm_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_native_dropout_backward_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_empty_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_empty_strided_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_zeros_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_new_zeros_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_adaptive_max_pool2d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_avg_pool1d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_binary_cross_entropy_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv1d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv_transpose2d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_conv_transpose3d_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_ctc_loss_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_fractional_max_pool3d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_gaussian_nll_loss_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_gaussian_nll_loss_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_huber_loss_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_huber_loss_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_leaky_relu_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_logsigmoid_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_max_unpool1d_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_max_unpool1d_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_max_unpool1d_grad_cuda_float32, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_mse_loss_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_circular_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_constant_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_constant_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_reflect_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_replicate_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_replicate_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_pad_replicate_negative_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_poisson_nll_loss_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_relu6_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_smooth_l1_loss_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_soft_margin_loss_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_soft_margin_loss_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_softmin_with_dtype_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_tanhshrink_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_triplet_margin_with_distance_loss_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_triplet_margin_with_distance_loss_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_nn_functional_unfold_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_norm_fro_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_norm_nuc_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ones_like_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_outer_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_permute_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_permute_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_0_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_polygamma_polygamma_n_1_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_positive_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_pow_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_pow_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_randn_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_ravel_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_remainder_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_remainder_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_repeat_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resize__cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_resolve_conj_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rot90_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rot90_cuda_int32, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_round_decimals_3_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_round_decimals_3_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rsub_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rsub_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_rsub_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_amin_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_mean_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_scatter_reduce_sum_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_searchsorted_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sgn_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sigmoid_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_signal_windows_general_cosine_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sin_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_slice_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_softmax_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sort_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sort_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_sparse_mm_reduce_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_bessel_j0_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_chebyshev_polynomial_w_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_erfcx_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_hermite_polynomial_h_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_hermite_polynomial_he_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_i0e_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_i1_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_i1e_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_modified_bessel_k1_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_modified_bessel_k1_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_ndtr_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_shifted_chebyshev_polynomial_v_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_special_spherical_bessel_j0_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_split_list_args_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_squeeze_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_stack_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_std_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_std_mean_unbiased_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_t_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_take_along_dim_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_take_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_take_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tan_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tensor_split_cuda_complex64, 
test/test_decomp.py::TestDecompCUDA::test_comprehensive_tensordot_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_to_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_to_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_to_sparse_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_torch_ops_aten__flash_attention_forward_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_trapezoid_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_trapezoid_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_trapz_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_tril_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_triu_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_triu_indices_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unfold_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unfold_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unique_consecutive_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unique_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_unsafe_split_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_vdot_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_as_complex_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_comprehensive_view_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_comprehensive_xlogy_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_comprehensive_xlogy_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zero__cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zeros_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_comprehensive_zeros_like_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick__batch_norm_with_update_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick__chunk_cat_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick__softmax_backward_data_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick__unsafe_masked_index_put_accumulate_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick__unsafe_masked_index_put_accumulate_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_acos_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_acosh_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_add_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_addcmul_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_addr_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_addr_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_alias_copy_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_alias_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_any_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_arange_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_as_strided_scatter_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_asin_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_asin_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_atan2_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_baddbmm_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_not_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_bitwise_right_shift_cuda_int8, 
test/test_decomp.py::TestDecompCUDA::test_quick_block_diag_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_block_diag_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_cat_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_cat_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_cat_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_clone_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_clone_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_clone_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_rot90_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_std_mean_unbiased_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_unbind_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_core_backward_unsafe_split_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_cosh_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_cosh_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_cumprod_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_cumprod_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_cumsum_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_diag_embed_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_diagonal_copy_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_diagonal_scatter_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_digamma_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_dist_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_div_trunc_rounding_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_div_trunc_rounding_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_empty_strided_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_erf_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_erfinv_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_exp2_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_expand_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_expm1_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_expm1_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_exponential_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_eye_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_fft_fftn_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_fft_fftn_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ifft_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_fft_ifftn_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_fft_irfft_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_fft_irfft_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_fill_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_flip_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_fmin_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_frac_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_frexp_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_full_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_full_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_full_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_gcd_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_gt_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_heaviside_cuda_float32, 
test/test_decomp.py::TestDecompCUDA::test_quick_i0_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_i0_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_igamma_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_index_add_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_index_copy_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_index_fill_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_index_fill_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_index_fill_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_index_select_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_isin_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_isposinf_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_item_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_lgamma_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_linalg_cross_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_linspace_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_log10_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_logical_and_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_logical_or_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_logical_xor_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_logit_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_lt_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_quick_masked_fill_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_meshgrid_variadic_tensors_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_mvlgamma_mvlgamma_p_5_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_nan_to_num_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_narrow_copy_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_native_batch_norm_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_native_dropout_backward_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_ne_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_ne_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_ne_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_new_empty_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_nextafter_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_embedding_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_embedding_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_leaky_relu_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_relu6_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_relu6_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_relu6_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_relu6_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_nn_functional_softplus_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_norm_fro_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_norm_inf_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_normal_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_normal_in_place_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_ones_like_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_ones_like_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_prod_cuda_float32, 
test/test_decomp.py::TestDecompCUDA::test_quick_randn_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_reciprocal_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_reciprocal_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_remainder_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_roll_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_select_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_select_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_sigmoid_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_sinh_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_slice_scatter_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_special_entr_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_special_i0e_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_special_i0e_cuda_uint8, test/test_decomp.py::TestDecompCUDA::test_quick_special_i1e_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_special_log_ndtr_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_special_xlog1py_cuda_int32, test/test_decomp.py::TestDecompCUDA::test_quick_split_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_split_with_sizes_copy_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_split_with_sizes_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_squeeze_multiple_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_squeeze_multiple_cuda_int16, test/test_decomp.py::TestDecompCUDA::test_quick_stack_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_std_mean_unbiased_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_std_unbiased_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_sub_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_tan_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_tanh_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_transpose_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_tril_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_tril_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_unbind_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_unfold_copy_cuda_float32, test/test_decomp.py::TestDecompCUDA::test_quick_unfold_cuda_complex32, test/test_decomp.py::TestDecompCUDA::test_quick_unsqueeze_copy_cuda_float64, test/test_decomp.py::TestDecompCUDA::test_quick_unsqueeze_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_var_cuda_complex128, test/test_decomp.py::TestDecompCUDA::test_quick_view_cuda_float16, test/test_decomp.py::TestDecompCUDA::test_quick_where_cuda_int64, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_bfloat16, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_bool, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_complex64, test/test_decomp.py::TestDecompCUDA::test_quick_zeros_like_cuda_int8, test/test_decomp.py::TestDecompCUDA::test_uniform_cuda 2024-08-07T18:45:11.2455968Z 2024-08-07T18:45:15.0330036Z Running test_nestedtensor 1/1 ... 
[2024-08-07 18:45:15.032448] 2024-08-07T18:45:15.0334402Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_nestedtensor.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:45:15.033004] 2024-08-07T18:48:03.3216365Z 2024-08-07T18:48:03.3217404Z test_nestedtensor 1/1 was successful, full logs can be found in artifacts with path test/test-reports/test_nestedtensor_1.1_f8f817cb989c2891_.log 2024-08-07T18:48:03.4027076Z Running 1412 items in this shard: test/test_nestedtensor.py::TestNestedTensor::test_2d_nested_tensor_batch_size_2_max_seq_len_3_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_2d_nested_tensor_batch_size_2_max_seq_len_3_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_2d_nested_tensor_batch_size_2_max_seq_len_5_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_2d_nested_tensor_batch_size_2_max_seq_len_5_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_2d_nested_tensor_batch_size_4_max_seq_len_3_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_2d_nested_tensor_batch_size_4_max_seq_len_3_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_2d_nested_tensor_batch_size_4_max_seq_len_5_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_2d_nested_tensor_batch_size_4_max_seq_len_5_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_batch_size_2_max_seq_len_3_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_batch_size_2_max_seq_len_3_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_batch_size_2_max_seq_len_5_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_batch_size_2_max_seq_len_5_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_batch_size_4_max_seq_len_3_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_batch_size_4_max_seq_len_3_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_batch_size_4_max_seq_len_5_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_batch_size_4_max_seq_len_5_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_float_batch_size_2_max_seq_len_3_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_float_batch_size_2_max_seq_len_3_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_float_batch_size_2_max_seq_len_5_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_float_batch_size_2_max_seq_len_5_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_float_batch_size_4_max_seq_len_3_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_float_batch_size_4_max_seq_len_3_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_float_batch_size_4_max_seq_len_5_vocab_size_10, test/test_nestedtensor.py::TestNestedTensor::test_3d_nested_tensor_float_batch_size_4_max_seq_len_5_vocab_size_20, test/test_nestedtensor.py::TestNestedTensor::test_cat, test/test_nestedtensor.py::TestNestedTensor::test_copy_, test/test_nestedtensor.py::TestNestedTensor::test_default_nested_tensor, test/test_nestedtensor.py::TestNestedTensor::test_dim, 
test/test_nestedtensor.py::TestNestedTensor::test_fill_, test/test_nestedtensor.py::TestNestedTensor::test_is_contiguous, test/test_nestedtensor.py::TestNestedTensor::test_like_functions_ones_like, test/test_nestedtensor.py::TestNestedTensor::test_like_functions_randn_like, test/test_nestedtensor.py::TestNestedTensor::test_like_functions_zeros_like, test/test_nestedtensor.py::TestNestedTensor::test_nested_namespace, test/test_nestedtensor.py::TestNestedTensor::test_nested_tensor, test/test_nestedtensor.py::TestNestedTensor::test_nested_tensor_matching_dim, test/test_nestedtensor.py::TestNestedTensor::test_numel, test/test_nestedtensor.py::TestNestedTensor::test_repr_string, test/test_nestedtensor.py::TestNestedTensor::test_size, test/test_nestedtensor.py::TestNestedTensor::test_size_dim, test/test_nestedtensor.py::TestNestedTensor::test_stride, test/test_nestedtensor.py::TestNestedTensor::test_to, test/test_nestedtensor.py::TestNestedTensor::test_to_padded_tensor_on_empty_tensor, test/test_nestedtensor.py::TestNestedTensor::test_unbind_0, test/test_nestedtensor.py::TestNestedTensor::test_unbind_1, test/test_nestedtensor.py::TestNestedTensor::test_unbind_3, test/test_nestedtensor.py::TestNestedTensor::test_unbind_4, test/test_nestedtensor.py::TestNestedTensor::test_unbind_dim, test/test_nestedtensor.py::TestNestedTensor::test_zero_, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_abs__cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_abs_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_cos_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_gelu__cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_gelu_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_logical_not_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_neg_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_relu__cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_relu_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_sgn_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_silu__cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_silu_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_sin_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_tanh__cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_activations_tanh_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_binary_ops_with_scalar_eq_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_binary_ops_with_scalar_ge_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_bmm_cpu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_bmm_cpu_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_bmm_cuda_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_bmm_cuda_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_bmm_cuda_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_bmm_noncontiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_bmm_noncontiguous_cuda_float64, 
test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_clone_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_clone_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_contiguous_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_contiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_detach_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_detach_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_detach_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_device_checks_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_dropout_jagged_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_dropout_jagged_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_dropout_noncontiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_dropout_noncontiguous_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_dropout_strided_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_dropout_strided_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_embedding_jagged_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_embedding_strided_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_empty_like_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_empty_like_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_empty_like_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_layer_norm_breaking_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_layer_norm_breaking_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_layer_norm_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_layer_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_linear_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_linear_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_linear_noncontiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_linear_noncontiguous_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_masked_fill_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_masked_fill_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_masked_fill_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_matmul_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_matmul_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_matmul_noncontiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_matmul_noncontiguous_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_matmul_nt_with_broadcasted_t_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_matmul_nt_with_broadcasted_t_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_matmul_with_bmm_path_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_matmul_with_bmm_path_cuda_float64, 
test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_narrow_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_narrow_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_narrow_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_add_in_place_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_add_in_place_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_add_transpose_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_add_transpose_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_add_transpose_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_add_transpose_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_chunk_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_chunk_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_chunk_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_dense_elementwise_embedding_dim_128_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_dense_elementwise_embedding_dim_128_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_dense_elementwise_embedding_dim_256_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_dense_elementwise_embedding_dim_256_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_dense_elementwise_embedding_dim_384_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_dense_elementwise_embedding_dim_384_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_dense_elementwise_embedding_dim_8_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_dense_elementwise_embedding_dim_8_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_div_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_div_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_indexing_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_indexing_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_indexing_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_indexing_noncontiguous_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_indexing_noncontiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_indexing_noncontiguous_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_mul_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_mul_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_mul_in_place_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_mul_in_place_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_split_with_sizes_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_split_with_sizes_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_split_with_sizes_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_sub_transpose_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_sub_transpose_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_sub_transpose_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_sub_transpose_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_nested_tensor_sum_dim_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_reshape_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_reshape_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_reshape_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_scaled_dot_product_attention_input_dim_3_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_scaled_dot_product_attention_input_dim_4_cuda, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_softmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_softmax_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_softmax_noncontiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_softmax_noncontiguous_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_squeeze_unsqueeze_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_squeeze_unsqueeze_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_squeeze_unsqueeze_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim2_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim2_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim3_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim3_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim4_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim4_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_dim4_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_noncontiguous_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_noncontiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_noncontiguous_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_output_size_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_output_size_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_simple_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_simple_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_zero_numel_errors_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_zero_numel_errors_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_padded_tensor_zero_numel_errors_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_to_then_from_padded_tensor_no_transform0213_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_transpose_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_transpose_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_transpose_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_transpose_inference_mode_interaction_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_transpose_inference_mode_interaction_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_transpose_inference_mode_interaction_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_unbind_noncontiguous_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_unbind_noncontiguous_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_unbind_noncontiguous_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_view_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_view_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_view_cuda_float64, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_view_inference_mode_interaction_cuda_float16, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_view_inference_mode_interaction_cuda_float32, test/test_nestedtensor.py::TestNestedTensorDeviceTypeCUDA::test_view_inference_mode_interaction_cuda_float64, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_abs_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_accumulate_grad_different_strides_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_as_nested_tensor_propagates_gradients_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_backward_add_strided_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_backward_for_add_op_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_backward_for_sub_op_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_backward_sub_strided_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_dropout_backward_jagged_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_dropout_backward_strided_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_gelu_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_indexing_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_5d_size_128_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_5d_size_2_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_5d_size_32_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_5d_size_4_cuda, 
test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_edge_case_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_1023_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_1024_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_128_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_256_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_2_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_32_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_4_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_512_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_layer_norm_backward_size_513_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_masked_fill_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_bmm_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_bmm_gradcheck_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_from_list_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_from_mask_and_to_padded_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_from_padded_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_from_padded_fused_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_generates_leaf_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_linear_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_linear_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_linear_plus_transpose_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_matmul_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_matmul_gradcheck_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_reshape_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_reshape_gradcheck_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_softmax_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_squeeze_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_squeeze_gradcheck_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_to_padded_tensor_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_transpose_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_transpose_gradcheck_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_unsqueeze_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_nested_tensor_unsqueeze_gradcheck_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_relu_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_selu_backward_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_set_requires_grad_from_list_cuda, 
test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_set_requires_grad_from_mask_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_split_with_sizes_flow_through_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_to_buffer_series_ops_grad_with_broadcast_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_unbind_flow_through_cuda, test/test_nestedtensor.py::TestNestedTensorAutogradCUDA::test_values_grad_with_broadcast_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_apply__cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_jagged_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_False_contiguous_True_cuda_float64, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_0_layout_strided_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_jagged_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_False_contiguous_True_cuda_float16, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_1_layout_strided_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_jagged_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_False_contiguous_False_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_2_layout_strided_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_jagged_requires_grad_True_contiguous_True_cuda_float64, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_3_layout_strided_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_True_contiguous_False_cuda_float64, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_jagged_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_False_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_False_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_False_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_False_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_False_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_False_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_True_contiguous_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_True_contiguous_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_True_contiguous_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_True_contiguous_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_True_contiguous_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_as_nested_tensor_from_tensor_dim_4_layout_strided_requires_grad_True_contiguous_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_binary_pointwise_broadcasting_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_binary_pointwise_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_binary_pointwise_transposed_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_chunk_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_compile_preserves_metadata_cache_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_compile_with_dynamic_max_seq_len_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_compile_with_dynamic_min_seq_len_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_compile_with_propagated_dynamic_max_seq_len_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_device_dtype_transfer_updates_offsets_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_device_dtype_transfer_updates_offsets_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_dummy_mha_with_nt_cuda, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_flatten_decomp_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_is_contiguous_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_is_same_size_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_as_nested_tensor_components_require_grad_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_as_nested_tensor_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_as_nested_tensor_components_require_grad_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_as_nested_tensor_components_require_grad_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_as_nested_tensor_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_as_nested_tensor_components_require_grad_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_False_components_require_grad_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_False_components_require_grad_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_False_components_require_grad_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_False_components_require_grad_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_True_components_require_grad_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_True_components_require_grad_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_True_components_require_grad_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_nested_tensor_requires_grad_True_components_require_grad_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_layout_construction_with_pinned_memory_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_mean_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_mean_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_mean_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_mean_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_mean_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_mean_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_mean_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_mean_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_sum_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_sum_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_sum_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_sum_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_sum_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_sum_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_sum_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_op_different_output_shape_dim_sum_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_padded_dense_conversion_kernels_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_padded_dense_conversion_kernels_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_padded_dense_conversion_kernels_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_False_values_is_view_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_False_values_is_view_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_False_values_is_view_False_cuda_float64, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_False_values_is_view_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_False_values_is_view_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_False_values_is_view_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_True_values_is_view_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_True_values_is_view_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_True_values_is_view_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_True_values_is_view_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_True_values_is_view_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_jagged_view_from_values_offsets_requires_grad_True_values_is_view_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_2d_input_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_2d_input_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_2d_input_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_2d_input_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_operate_on_batch_dim_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_operate_on_batch_dim_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_operate_on_batch_dim_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_operate_on_batch_dim_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_reduce_ragged_idx_1_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_reduce_ragged_idx_1_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_reduce_ragged_idx_1_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_reduce_ragged_idx_1_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_with_lengths_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_with_lengths_requires_grad_False_components_require_grad_True_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_with_lengths_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layer_norm_with_lengths_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_layout_under_torch_dispatch_mode_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_like_shape_empty_like_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_like_shape_randn_like_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_like_value_ones_like_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_like_value_zeros_like_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_linear_nt_dim_3_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_linear_nt_dim_4_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_linear_nt_dim_5_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_keepdim_False_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_keepdim_False_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_keepdim_False_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_keepdim_False_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_keepdim_False_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_keepdim_False_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_keepdim_False_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_keepdim_False_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_reduce_multiple_dims_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_reduce_multiple_dims_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_reduce_multiple_dims_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_mean_dim_reduce_multiple_dims_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_narrow_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_nested_tensor_activation_checkpoint_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_nested_tensor_from_jagged_fx_trace_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_nested_tensor_from_jagged_pass_min_max_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_nested_tensor_from_jagged_pass_min_max_True_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_njt_cat_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_noncontiguous_pointwise_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_mean_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_mean_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_mean_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_mean_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_mean_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_mean_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_mean_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_mean_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_sum_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_sum_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_sum_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_sum_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_sum_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_sum_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_sum_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_batch_only_different_output_shape_sum_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_mean_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_mean_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_mean_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_mean_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_mean_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_mean_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_mean_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_mean_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_sum_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_sum_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_sum_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_sum_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_sum_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_sum_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_sum_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_1_different_output_shape_sum_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_1_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_1_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_1_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_1_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_1_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_1_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_1_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_1_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_2_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_2_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_2_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_2_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_2_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_2_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_2_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_mean_transpose_offset_2_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_1_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_1_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_1_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_1_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_1_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_1_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_1_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_1_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_2_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_2_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_2_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_2_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_2_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_2_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_2_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_reduce_ragged_idx_greater_than_1_different_output_shape_sum_transpose_offset_2_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_mean_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_mean_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_mean_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_mean_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_mean_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_mean_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_mean_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_mean_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_sum_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_sum_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_sum_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_sum_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_sum_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_sum_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_sum_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_transpose_non_ragged_dim_different_output_shape_sum_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_mean_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_mean_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_mean_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_mean_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_mean_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_mean_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_mean_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_mean_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_sum_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_sum_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_sum_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_sum_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_sum_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_sum_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_sum_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_op_dim_with_lengths_different_output_shape_sum_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_pin_memory_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_profiler_sequence_nr_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_reshape_decomp_requires_grad_False_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_reshape_decomp_requires_grad_True_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_backwards_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_backwards_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_compile_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_compile_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_with_constant_sequence_length_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_with_constant_sequence_length_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_with_constant_sequence_length_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_with_packed_in_proj_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sdpa_with_packed_in_proj_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_False_weights_only_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_False_weights_only_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_False_weights_only_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_False_weights_only_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_False_weights_only_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_False_weights_only_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_True_weights_only_False_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_True_weights_only_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_True_weights_only_False_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_True_weights_only_True_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_True_weights_only_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_serialization_requires_grad_True_weights_only_True_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_1_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_1_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_1_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_1_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_greater_than_1_same_output_shape_transpose_offset_1_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_greater_than_1_same_output_shape_transpose_offset_1_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_greater_than_1_same_output_shape_transpose_offset_1_requires_grad_True_components_require_grad_False_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_greater_than_1_same_output_shape_transpose_offset_1_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_greater_than_1_same_output_shape_transpose_offset_2_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_greater_than_1_same_output_shape_transpose_offset_2_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_greater_than_1_same_output_shape_transpose_offset_2_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_reduce_ragged_idx_greater_than_1_same_output_shape_transpose_offset_2_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_transpose_non_ragged_dim_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_transpose_non_ragged_dim_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_transpose_non_ragged_dim_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_transpose_non_ragged_dim_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_with_lengths_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_with_lengths_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_with_lengths_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_dim_with_lengths_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_reduce_batch_dim_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_reduce_batch_dim_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_reduce_batch_dim_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_softmax_reduce_batch_dim_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_specialize_dynamic_shape_cuda, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_specialize_dynamic_shape_recompile_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_split_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_split_with_sizes_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_squeeze_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_batch_and_non_batch_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_batch_and_non_batch_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_batch_and_non_batch_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_batch_and_non_batch_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_batch_and_non_batch_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_batch_and_non_batch_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_batch_and_non_batch_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_batch_and_non_batch_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_ragged_and_non_batch_keepdim_False_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_ragged_and_non_batch_keepdim_False_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_ragged_and_non_batch_keepdim_False_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_ragged_and_non_batch_keepdim_False_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_ragged_and_non_batch_keepdim_True_requires_grad_False_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_ragged_and_non_batch_keepdim_True_requires_grad_False_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_ragged_and_non_batch_keepdim_True_requires_grad_True_components_require_grad_False_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_sum_dim_reduce_ragged_and_non_batch_keepdim_True_requires_grad_True_components_require_grad_True_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_tensor_attributes_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_threshold_backward_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_to_copy_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unary_pointwise_cuda, 
test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unary_pointwise_transposed_inputs_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_backward_cuda_float16, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_backward_cuda_float32, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_backward_cuda_float64, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_lengths_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_lengths_ragged_idx_0_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_lengths_ragged_idx_1_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_lengths_ragged_idx_2_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_lengths_ragged_idx_3_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_lengths_ragged_idx_equals_2_bad_dim_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_transpose_ragged_idx_2_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_transpose_ragged_idx_3_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unbind_transpose_ragged_idx_last_dim_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_unsafe_view_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_view_ragged_idx_not_one_cuda, test/test_nestedtensor.py::TestNestedTensorSubclassCUDA::test_views_inherit_ragged_dim_cuda, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward___radd___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward___rdiv___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward___rmod___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward___rmul___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward___rpow___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward___rsub___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_abs_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_acos_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_acosh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_add_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_amax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_amin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_angle_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_asin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_asinh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_atan2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_atan_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_atanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_bfloat16_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_cdouble_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_ceil_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_cfloat_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_chalf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_clamp_max_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_clamp_min_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_complex_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_conj_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_conj_physical_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_copysign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_cos_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_cosh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_deg2rad_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_digamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_div_floor_rounding_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_div_no_rounding_mode_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_div_trunc_rounding_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_double_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_erf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_erfc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_erfinv_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_exp2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_exp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_expm1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_fill_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_float_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_float_power_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_floor_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_fmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_fmin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_fmod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_frac_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_frexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_half_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_hypot_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_i0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_ldexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_lgamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_linalg_vector_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_log10_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_log1p_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_log2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_log_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_logaddexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_logit_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_amax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_amin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_logsumexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_mean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_prod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_std_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_sum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_masked_var_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_max_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_maximum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_mean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_min_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_minimum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_mul_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nan_to_num_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nanmean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nansum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_neg_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_celu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_elu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_hardshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_hardsigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_hardtanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_logsigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_mish_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_prelu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_relu6_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_relu_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_rrelu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_selu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_silu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_softplus_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_softshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_softsign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_tanhshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_nn_functional_threshold_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_polar_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_polygamma_polygamma_n_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_polygamma_polygamma_n_1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_polygamma_polygamma_n_2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_polygamma_polygamma_n_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_polygamma_polygamma_n_4_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_positive_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_pow_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_prod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_rad2deg_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_real_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_reciprocal_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_remainder_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_round_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_round_decimals_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_round_decimals_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_round_decimals_neg_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_rsqrt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_rsub_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sgn_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sinc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sinh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_entr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_erfcx_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_i0e_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_i1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_i1e_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_log_ndtr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_ndtr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_ndtri_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_special_xlog1py_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sqrt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_square_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_std_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_std_unbiased_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sub_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_sum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_tan_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_tanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_true_divide_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_trunc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_var_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_var_unbiased_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_backward_xlogy_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward___radd___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward___rdiv___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward___rmod___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward___rmul___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward___rpow___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward___rsub___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_abs_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_acos_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_acosh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_add_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_amax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_amin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_angle_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_asin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_asinh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_atan2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_atan_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_atanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_bfloat16_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_cdouble_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_ceil_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_cfloat_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_chalf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_clamp_max_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_clamp_min_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_complex_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_conj_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_conj_physical_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_copysign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_cos_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_cosh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_deg2rad_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_digamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_div_floor_rounding_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_div_no_rounding_mode_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_div_trunc_rounding_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_double_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_erf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_erfc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_erfinv_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_exp2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_exp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_expm1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_fill_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_float_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_float_power_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_floor_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_fmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_fmin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_fmod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_frac_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_frexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_half_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_hypot_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_i0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_ldexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_lgamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_linalg_vector_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_log10_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_log1p_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_log2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_log_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_logaddexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_logit_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_amax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_amin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_logsumexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_mean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_prod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_std_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_sum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_masked_var_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_max_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_maximum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_mean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_min_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_minimum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_mul_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nan_to_num_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nanmean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nansum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_neg_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_celu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_elu_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_hardshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_hardsigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_hardtanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_logsigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_mish_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_prelu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_relu6_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_relu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_rrelu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_selu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_silu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_softplus_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_softshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_softsign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_tanhshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_nn_functional_threshold_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_polar_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_polygamma_polygamma_n_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_polygamma_polygamma_n_1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_polygamma_polygamma_n_2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_polygamma_polygamma_n_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_polygamma_polygamma_n_4_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_positive_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_pow_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_prod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_rad2deg_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_real_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_reciprocal_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_remainder_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_round_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_round_decimals_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_round_decimals_3_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_round_decimals_neg_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_rsqrt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_rsub_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sgn_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sinc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sinh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_entr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_erfcx_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_i0e_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_i1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_i1e_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_log_ndtr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_ndtr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_ndtri_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_special_xlog1py_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sqrt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_square_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_std_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_std_unbiased_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sub_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_sum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_tan_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_tanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_true_divide_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_trunc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_var_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_var_unbiased_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_backward_xlogy_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward___radd___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward___rdiv___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward___rmod___cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward___rmul___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward___rpow___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward___rsub___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_abs_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_acos_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_acosh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_add_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_all_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_amax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_amin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_angle_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_any_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_argmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_argmin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_asin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_asinh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_atan2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_atan_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_atanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_bfloat16_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_bool_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_byte_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_cdouble_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_ceil_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_cfloat_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_chalf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_char_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_clamp_max_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_clamp_min_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_complex_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_conj_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_conj_physical_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_copysign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_cos_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_cosh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_count_nonzero_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_deg2rad_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_digamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_div_floor_rounding_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_div_no_rounding_mode_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_div_trunc_rounding_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_double_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_eq_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_erf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_erfc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_erfinv_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_exp2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_exp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_expm1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_fill_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_float_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_float_power_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_floor_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_floor_divide_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_fmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_fmin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_fmod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_frac_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_frexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_ge_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_gt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_half_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_heaviside_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_hypot_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_i0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_igamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_igammac_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_int_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_isclose_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_isfinite_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_isinf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_isnan_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_isneginf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_isposinf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_isreal_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_jiterator_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_jiterator_binary_return_by_ref_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_jiterator_unary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_ldexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_le_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_lgamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_linalg_vector_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_log10_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_log1p_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_log2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_log_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_logaddexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_logical_and_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_logical_not_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_logical_or_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_logical_xor_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_logit_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_long_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_lt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_amax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_amin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_argmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_argmin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_logsumexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_mean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_prod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_std_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_sum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_masked_var_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_max_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_maximum_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_mean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_min_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_minimum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_mul_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nan_to_num_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nanmean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nansum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_ne_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_neg_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nextafter_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_celu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_elu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_hardshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_hardsigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_hardtanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_logsigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_mish_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_prelu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_relu6_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_relu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_rrelu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_selu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_silu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_softplus_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_softshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_softsign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_tanhshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_nn_functional_threshold_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_polar_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_polygamma_polygamma_n_0_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_polygamma_polygamma_n_1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_polygamma_polygamma_n_2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_polygamma_polygamma_n_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_polygamma_polygamma_n_4_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_positive_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_pow_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_prod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_rad2deg_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_real_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_reciprocal_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_remainder_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_round_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_round_decimals_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_round_decimals_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_round_decimals_neg_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_rsqrt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_rsub_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sgn_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_short_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_signbit_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sinc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sinh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_airy_ai_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_bessel_j0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_bessel_j1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_bessel_y0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_bessel_y1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_chebyshev_polynomial_t_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_chebyshev_polynomial_u_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_chebyshev_polynomial_v_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_chebyshev_polynomial_w_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_entr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_erfcx_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_hermite_polynomial_h_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_hermite_polynomial_he_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_i0e_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_i1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_i1e_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_laguerre_polynomial_l_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_legendre_polynomial_p_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_log_ndtr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_modified_bessel_i0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_modified_bessel_i1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_modified_bessel_k0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_modified_bessel_k1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_ndtr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_ndtri_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_scaled_modified_bessel_k0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_scaled_modified_bessel_k1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_shifted_chebyshev_polynomial_t_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_shifted_chebyshev_polynomial_v_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_shifted_chebyshev_polynomial_w_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_spherical_bessel_j0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_xlog1py_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_special_zeta_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sqrt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_square_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_std_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_std_unbiased_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sub_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_sum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_tan_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_tanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_true_divide_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_trunc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_var_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_var_unbiased_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_compile_forward_xlogy_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward___radd___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward___rdiv___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward___rmod___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward___rmul___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward___rpow___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward___rsub___cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_abs_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_acos_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_acosh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_add_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_all_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_amax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_amin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_angle_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_any_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_argmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_argmin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_asin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_asinh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_atan2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_atan_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_atanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_bfloat16_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_bool_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_byte_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_cdouble_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_ceil_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_cfloat_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_chalf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_char_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_clamp_max_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_clamp_min_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_complex_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_conj_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_conj_physical_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_copysign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_cos_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_cosh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_count_nonzero_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_deg2rad_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_digamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_div_floor_rounding_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_div_no_rounding_mode_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_div_trunc_rounding_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_double_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_eq_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_erf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_erfc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_erfinv_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_exp2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_exp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_expm1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_fill_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_float_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_float_power_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_floor_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_floor_divide_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_fmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_fmin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_fmod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_frac_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_frexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_ge_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_gt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_half_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_heaviside_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_hypot_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_i0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_igamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_igammac_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_int_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_isclose_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_isfinite_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_isinf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_isnan_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_isneginf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_isposinf_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_isreal_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_jiterator_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_jiterator_binary_return_by_ref_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_jiterator_unary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_ldexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_le_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_lgamma_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_linalg_vector_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_log10_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_log1p_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_log2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_log_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_logaddexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_logical_and_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_logical_not_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_logical_or_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_logical_xor_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_logit_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_long_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_lt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_amax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_amin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_argmax_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_argmin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_logsumexp_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_mean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_norm_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_prod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_std_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_sum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_masked_var_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_max_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_maximum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_mean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_min_binary_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_minimum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_mul_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nan_to_num_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nanmean_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nansum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_ne_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_neg_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nextafter_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_celu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_elu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_hardshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_hardsigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_hardtanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_logsigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_mish_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_prelu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_relu6_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_relu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_rrelu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_selu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_silu_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_softplus_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_softshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_softsign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_tanhshrink_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_nn_functional_threshold_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_polar_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_polygamma_polygamma_n_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_polygamma_polygamma_n_1_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_polygamma_polygamma_n_2_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_polygamma_polygamma_n_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_polygamma_polygamma_n_4_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_positive_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_pow_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_prod_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_rad2deg_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_real_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_reciprocal_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_remainder_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_round_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_round_decimals_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_round_decimals_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_round_decimals_neg_3_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_rsqrt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_rsub_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sgn_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_short_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sigmoid_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sign_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_signbit_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sin_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sinc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sinh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_airy_ai_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_bessel_j0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_bessel_j1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_bessel_y0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_bessel_y1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_chebyshev_polynomial_t_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_chebyshev_polynomial_u_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_chebyshev_polynomial_v_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_chebyshev_polynomial_w_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_entr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_erfcx_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_hermite_polynomial_h_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_hermite_polynomial_he_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_i0e_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_i1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_i1e_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_laguerre_polynomial_l_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_legendre_polynomial_p_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_log_ndtr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_modified_bessel_i0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_modified_bessel_i1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_modified_bessel_k0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_modified_bessel_k1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_ndtr_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_ndtri_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_scaled_modified_bessel_k0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_scaled_modified_bessel_k1_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_shifted_chebyshev_polynomial_t_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_shifted_chebyshev_polynomial_v_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_shifted_chebyshev_polynomial_w_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_spherical_bessel_j0_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_xlog1py_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_special_zeta_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sqrt_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_square_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_std_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_std_unbiased_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sub_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_sum_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_tan_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_tanh_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_true_divide_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_trunc_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_var_cuda_float32, test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_var_unbiased_cuda_float32, 
test/test_nestedtensor.py::TestNestedTensorOpInfoCUDA::test_forward_xlogy_cuda_float32
2024-08-07T18:48:03.4807952Z 
2024-08-07T18:48:07.2623917Z Running inductor/test_torchinductor 3/4 ... [2024-08-07 18:48:07.261840]
2024-08-07T18:48:07.2627722Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_torchinductor.py', '-m', 'not serial', '--shard-id=3', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:48:07.262355]
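The "Executing" record above is how each shard launches its slice of the suite: plain CPython with -bb (raise errors on bytes/str comparisons), pytest in verbose mode with up to two reruns of failing tests, and a --shard-id=3 / --num-shards=4 pair that selects a deterministic quarter of the collected tests. As a rough sketch only (this is not PyTorch's actual run_test.py scheduling, which can also balance shards using recorded test times), a reproducible split can be as simple as round-robin over a sorted list:

# Hypothetical sketch, not PyTorch's real sharding logic: a deterministic
# round-robin split, so every machine computes the same subset for a given
# --shard-id / --num-shards pair.
def shard(tests, shard_id, num_shards):
    assert 1 <= shard_id <= num_shards  # shard_id is 1-based, as in "3/4"
    ordered = sorted(tests)  # stable ordering is what makes the split reproducible
    return [t for i, t in enumerate(ordered) if i % num_shards == shard_id - 1]

# Shard 3 of 4 takes indices 2, 6, 10, ... of the sorted list:
print(shard(["test_a.py", "test_b.py", "test_c.py", "test_d.py", "test_e.py"], 3, 4))
# -> ['test_c.py']

Because the split is deterministic, the per-shard "Running N items in this shard:" lists that follow can in principle be reproduced locally by re-running the same command with the same shard arguments.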
2024-08-07T18:49:10.9825713Z 
2024-08-07T18:49:10.9829983Z test_modules 2/2 was successful, full logs can be found in artifacts with path test/test-reports/test_modules_2.2_07adb4607eb49a41_.log
2024-08-07T18:49:11.0736875Z Running 1809 items in this shard: test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_CELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_ELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_ELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_Hardswish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_Hardswish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_Hardtanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_Hardtanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_LeakyReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_LeakyReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_Mish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_ReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_check_inplace_nn_Threshold_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_AdaptiveAvgPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_AdaptiveAvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_AdaptiveAvgPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_AdaptiveAvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_AvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_BCELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_BCEWithLogitsLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_BatchNorm1d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_BatchNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_BatchNorm2d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_BatchNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CTCLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CircularPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CircularPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CircularPad2d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CircularPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ConstantPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ConstantPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ConstantPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Conv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Conv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Conv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Conv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Conv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ConvTranspose1d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ConvTranspose2d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ConvTranspose2d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ConvTranspose3d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CosineEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CrossEntropyLoss_cuda_float16, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_CrossEntropyLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Embedding_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_FractionalMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_GELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_GELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_GRUCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_GRU_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_GRU_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_GRU_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_GroupNorm_cuda_bfloat16, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_GroupNorm_cuda_float16, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Hardshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Hardtanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Hardtanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_HingeEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_HuberLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_InstanceNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_InstanceNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_InstanceNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_InstanceNorm2d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_InstanceNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_InstanceNorm3d_eval_mode_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_InstanceNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_InstanceNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_L1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LPPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LPPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LPPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LSTMCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LSTM_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LayerNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LayerNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LazyConv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LazyConv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LazyConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LazyConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LazyConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LazyConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LazyConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LeakyReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LeakyReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LocalResponseNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LocalResponseNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LogSigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LogSoftmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_LogSoftmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MSELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MSELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MarginRankingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MarginRankingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MaxPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MultiLabelMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MultiheadAttention_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_MultiheadAttention_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_NLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_PReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_PoissonNLLLoss_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_RMSNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_RNNCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_RNN_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_RNN_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_RNN_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_RNN_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReLU6_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReflectionPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReflectionPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReflectionPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReflectionPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReplicationPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReplicationPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ReplicationPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_SELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_SELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_SiLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Sigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_SoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Softmax2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Softmax2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Softmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Softmin_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Softmin_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Softplus_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Tanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Threshold_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Threshold_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_TransformerDecoderLayer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_TransformerEncoderLayer_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_TransformerEncoderLayer_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_TransformerEncoder_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_TransformerEncoder_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Transformer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_Transformer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ZeroPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ZeroPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_cpu_gpu_parity_nn_ZeroPad3d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AdaptiveAvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AdaptiveAvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AdaptiveAvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AdaptiveMaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AdaptiveMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AdaptiveMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AdaptiveMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AvgPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AvgPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_AvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_BCEWithLogitsLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_BCEWithLogitsLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_BatchNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_BatchNorm2d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_BatchNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Bilinear_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_CELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_CTCLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_CircularPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConstantPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConstantPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Conv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Conv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Conv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Conv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose1d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose1d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose2d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose2d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose2d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose3d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ConvTranspose3d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_CosineEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_CosineEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_CrossEntropyLoss_cuda_float16, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_CrossEntropyLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_FractionalMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_FractionalMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_GELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_GELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_GLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_GRUCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_GRU_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_GaussianNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_GroupNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Hardshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Hardshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Hardswish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Hardtanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_HingeEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_HuberLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_InstanceNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_InstanceNorm2d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_InstanceNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_InstanceNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_InstanceNorm3d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_L1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LPPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LPPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LPPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LPPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LPPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LSTMCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LSTM_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LayerNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LazyConv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LazyConv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LazyConv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LazyConv3d_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LazyConv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LazyConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LazyConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LeakyReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LeakyReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Linear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LocalResponseNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LogSigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_LogSoftmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_MarginRankingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_MaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_MaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_MaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Mish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_MultiLabelSoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_MultiMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_MultiheadAttention_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_NLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_RMSNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_RNNCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_RNN_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_RNN_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReLU6_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReflectionPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReflectionPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReflectionPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReplicationPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReplicationPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReplicationPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ReplicationPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_SELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_SiLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Sigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_SoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Softmax2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Softmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Softmin_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Softplus_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Tanhshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Threshold_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_Threshold_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_TransformerDecoderLayer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_TransformerEncoderLayer_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_TransformerEncoder_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ZeroPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ZeroPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ZeroPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_device_ctx_init_nn_ZeroPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_errors_nn_CircularPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_errors_nn_CircularPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_errors_nn_CircularPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_errors_nn_GRUCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_errors_nn_GRU_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_errors_nn_GRU_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_errors_nn_GRU_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_errors_nn_LSTM_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_errors_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_errors_nn_RNN_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_errors_nn_RNN_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveAvgPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveAvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveAvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveAvgPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveAvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveMaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AdaptiveMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AvgPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_AvgPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_BCELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_BatchNorm1d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_BatchNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_BatchNorm1d_train_mode_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_BatchNorm2d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_BatchNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Bilinear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CTCLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CTCLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CircularPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CircularPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CircularPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CircularPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConstantPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConstantPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConstantPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConstantPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Conv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Conv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Conv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Conv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose1d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose1d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose1d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose2d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose2d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose3d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CosineEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CosineEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CrossEntropyLoss_cuda_float16, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_CrossEntropyLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_FractionalMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_GELU_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_GRU_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_GRU_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_GRU_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_GroupNorm_cuda_bfloat16, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_GroupNorm_cuda_float16, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_GroupNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Hardswish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Hardtanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Hardtanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_HingeEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_HuberLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_InstanceNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_InstanceNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_InstanceNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_InstanceNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_InstanceNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_InstanceNorm3d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_InstanceNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_KLDivLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_L1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LPPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LPPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LPPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LPPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LSTM_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LayerNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LayerNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LazyConv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LazyConv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LazyConv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LazyConv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LazyConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LazyConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LeakyReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LeakyReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Linear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LogSigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_LogSoftmax_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MarginRankingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MaxPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MultiLabelMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MultiLabelSoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MultiLabelSoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MultiMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MultiheadAttention_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_MultiheadAttention_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_PReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_PoissonNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_RNNCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_RNN_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_RNN_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ReflectionPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ReflectionPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ReplicationPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ReplicationPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_SELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_SiLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Sigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Sigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_SmoothL1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_SmoothL1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_SoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Softmax2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Softmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Softsign_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_Tanhshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_TransformerEncoderLayer_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_TransformerEncoderLayer_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_TransformerEncoder_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ZeroPad1d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ZeroPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ZeroPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_factory_kwargs_nn_ZeroPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_AdaptiveAvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_AdaptiveAvgPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_AdaptiveMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_AdaptiveMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_AvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_BCELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_BatchNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_BatchNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_BatchNorm2d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_BatchNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Bilinear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_CTCLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_CircularPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_CircularPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_CircularPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_CircularPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_CircularPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConstantPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConstantPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConstantPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConstantPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Conv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Conv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Conv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConvTranspose1d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConvTranspose1d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConvTranspose1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConvTranspose2d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConvTranspose3d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_CosineEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_CosineEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Embedding_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Embedding_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_FractionalMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_FractionalMaxPool2d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_forward_nn_FractionalMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_GELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_GELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_GLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_GRUCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_GRU_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_GRU_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_GRU_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_GaussianNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_GaussianNLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_GroupNorm_cuda_float16, test/test_modules.py::TestModuleCUDA::test_forward_nn_GroupNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Hardswish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_HingeEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_HingeEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_InstanceNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_InstanceNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_InstanceNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_InstanceNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_InstanceNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_InstanceNorm3d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_InstanceNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_KLDivLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_KLDivLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_L1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_LPPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LPPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_LPPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LSTMCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LSTMCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_LSTM_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_LSTM_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_LayerNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_LazyConv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_LazyConv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LazyConv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LazyConvTranspose1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LazyConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LazyConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_LeakyReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Linear_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Linear_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_forward_nn_LocalResponseNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LogSigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_LogSigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_MSELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_MSELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_MarginRankingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_MaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_MaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Mish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_MultiLabelMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_MultiLabelMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_MultiLabelSoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_MultiMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_MultiheadAttention_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_MultiheadAttention_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_NLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_PReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_PoissonNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_PoissonNLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReLU6_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReflectionPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReflectionPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReflectionPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReflectionPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReflectionPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReplicationPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReplicationPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReplicationPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ReplicationPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Sigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_SmoothL1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Softmax2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Softmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Softplus_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Softplus_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Softshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Softsign_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Tanhshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Tanhshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Threshold_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_forward_nn_Threshold_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_TransformerDecoderLayer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_TransformerEncoderLayer_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_TransformerEncoderLayer_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_TransformerEncoderLayer_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_TransformerEncoderLayer_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_TransformerEncoder_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_TransformerEncoder_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_Transformer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_Transformer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ZeroPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ZeroPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ZeroPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_forward_nn_ZeroPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_forward_nn_ZeroPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_grad_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_AdaptiveMaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_AvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_BCELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_BCEWithLogitsLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_BatchNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_BatchNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_BatchNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_BatchNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_CELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_CTCLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_CircularPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_CircularPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_ConstantPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Conv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_ConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Embedding_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_FractionalMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_FractionalMaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_GELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_GRUCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_GRU_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_GroupNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_InstanceNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_InstanceNorm3d_eval_mode_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_grad_nn_KLDivLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_L1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LPPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LPPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LPPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LayerNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LazyConv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LazyConv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LazyConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_LogSoftmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_MaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_MaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_MultiheadAttention_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_PReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_RNNCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_RNN_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_ReplicationPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Sigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_SmoothL1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_SoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Softmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Softplus_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Tanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Tanhshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_TransformerEncoder_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_TransformerEncoder_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_Transformer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_ZeroPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_ZeroPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_grad_nn_ZeroPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_AdaptiveAvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_AdaptiveAvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_AdaptiveMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_AvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_BCELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_BCEWithLogitsLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_CircularPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_CircularPad3d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_ConstantPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_ConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_CosineEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_ELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_Embedding_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_FractionalMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_FractionalMaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_GaussianNLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_Hardshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_HingeEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_InstanceNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_InstanceNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_KLDivLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_LPPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_LPPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_LayerNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_LazyConv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_LazyConv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_LazyConv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_Linear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_LogSoftmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_MSELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_MultiLabelSoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_MultiheadAttention_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_MultiheadAttention_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_RNN_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_RNN_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_ReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_ReplicationPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_SELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_Sigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_SmoothL1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_Softmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_Softmin_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_Softsign_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_Tanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_TransformerDecoderLayer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_gradgrad_nn_TransformerEncoderLayer_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_AdaptiveAvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_AdaptiveAvgPool3d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_AdaptiveMaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_AdaptiveMaxPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_AvgPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_AvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_BCELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_BCEWithLogitsLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_BatchNorm1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_BatchNorm2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_BatchNorm3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Bilinear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_CircularPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_CircularPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_CircularPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_CircularPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConstantPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConstantPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConstantPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Conv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Conv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Conv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConvTranspose1d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConvTranspose1d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConvTranspose1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConvTranspose2d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConvTranspose2d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConvTranspose3d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_CosineEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Embedding_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GRUCell_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GRU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GRU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GaussianNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GaussianNLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GroupNorm_cuda_bfloat16, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GroupNorm_cuda_float16, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_GroupNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_HingeEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_HingeEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_HuberLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_InstanceNorm1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_InstanceNorm1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_InstanceNorm2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_InstanceNorm3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_L1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_L1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LPPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LPPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LPPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LSTMCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LSTM_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LazyConv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LazyConv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LazyConvTranspose1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LazyConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LazyConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LazyConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Linear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_LogSoftmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MarginRankingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Mish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MultiLabelMarginLoss_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MultiLabelMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MultiLabelSoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MultiMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_MultiheadAttention_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_NLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_PoissonNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_RMSNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_RNNCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_RNN_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_RNN_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReLU6_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReflectionPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReflectionPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReplicationPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReplicationPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReplicationPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ReplicationPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_SiLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Sigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Sigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_SmoothL1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_SoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Softmax2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Softmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Softmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Softplus_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Softplus_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Softshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Softsign_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Softsign_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Tanh_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Tanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_Threshold_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_TransformerEncoderLayer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_TransformerEncoderLayer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_TransformerEncoder_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ZeroPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ZeroPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ZeroPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ZeroPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_if_train_and_eval_modes_differ_nn_ZeroPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AdaptiveAvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AdaptiveAvgPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AdaptiveMaxPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AdaptiveMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AdaptiveMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AdaptiveMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AvgPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_AvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_BCELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_BCELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_BatchNorm2d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_BatchNorm2d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_BatchNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_BatchNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_BatchNorm3d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_BatchNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Bilinear_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CTCLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CTCLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CircularPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CircularPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CircularPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CircularPad2d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_memory_format_nn_CircularPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConstantPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConstantPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConstantPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConstantPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Conv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Conv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Conv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Conv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose1d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose1d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose1d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose2d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose3d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose3d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_FractionalMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_FractionalMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_FractionalMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_GELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_GELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_GRUCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_GRUCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_GRU_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_GroupNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Hardshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Hardswish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Hardswish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Hardtanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Hardtanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_HingeEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_HuberLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_HuberLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_InstanceNorm1d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_InstanceNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_InstanceNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_InstanceNorm2d_train_mode_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_memory_format_nn_InstanceNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_InstanceNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_KLDivLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_L1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LPPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LPPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LPPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LSTM_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LazyConv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LazyConv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LazyConv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LazyConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LazyConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LazyConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LeakyReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LeakyReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LocalResponseNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LogSigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LogSigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_LogSoftmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MSELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MarginRankingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Mish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MultiLabelMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MultiMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MultiheadAttention_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MultiheadAttention_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_MultiheadAttention_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_NLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_PReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_PReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_PoissonNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_RMSNorm_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_memory_format_nn_RNNCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_RNN_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ReLU6_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ReflectionPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ReplicationPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ReplicationPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ReplicationPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_SELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_SiLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_SoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_SoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Softmin_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Softmin_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Softplus_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Softshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Softshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Tanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Tanhshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_TransformerDecoderLayer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_TransformerEncoderLayer_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_TransformerEncoderLayer_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_TransformerEncoderLayer_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_TransformerEncoder_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Transformer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_Transformer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ZeroPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_memory_format_nn_ZeroPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AdaptiveAvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AdaptiveAvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AdaptiveMaxPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AdaptiveMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AdaptiveMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AdaptiveMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AdaptiveMaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_AvgPool3d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BCEWithLogitsLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BatchNorm1d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BatchNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BatchNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BatchNorm2d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BatchNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_BatchNorm3d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Bilinear_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_CELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_CTCLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_CircularPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_CircularPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConstantPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConstantPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Conv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Conv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Conv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConvTranspose1d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConvTranspose1d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConvTranspose1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConvTranspose2d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConvTranspose3d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ConvTranspose3d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_CosineEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_CrossEntropyLoss_cuda_float16, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Embedding_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Embedding_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_GLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_GRUCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_GRUCell_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_GRU_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_GroupNorm_cuda_bfloat16, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_GroupNorm_cuda_float16, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_GroupNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_GroupNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Hardshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Hardtanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_HuberLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_HuberLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_InstanceNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_InstanceNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_InstanceNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_InstanceNorm3d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_InstanceNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_KLDivLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_L1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LPPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LPPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LSTMCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LSTM_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LayerNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LazyConv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LazyConv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LazyConv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LazyConv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LazyConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LazyConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LazyConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LazyConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LeakyReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Linear_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LocalResponseNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LocalResponseNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LogSigmoid_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LogSigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_LogSoftmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MSELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MarginRankingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiLabelMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiLabelMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiLabelSoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiLabelSoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiheadAttention_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiheadAttention_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiheadAttention_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_MultiheadAttention_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_PReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_RMSNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_RNNCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_RNN_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReflectionPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReflectionPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReflectionPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReflectionPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReflectionPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReplicationPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ReplicationPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_SELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_SiLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_SoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_SoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Softmax2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Softmax2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Softmax_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Softmax_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Softplus_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Softshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Softsign_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Tanhshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_Threshold_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_TransformerDecoderLayer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_TransformerEncoderLayer_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_TransformerEncoder_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_TransformerEncoder_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ZeroPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_multiple_device_transfer_nn_ZeroPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveAvgPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveAvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveAvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveMaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveMaxPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AdaptiveMaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AvgPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_AvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_BCELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_BCEWithLogitsLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_BCEWithLogitsLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_BatchNorm2d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_BatchNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CTCLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CircularPad2d_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CircularPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CircularPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConstantPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConstantPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Conv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Conv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Conv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Conv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose1d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose1d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose2d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose3d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CosineEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CosineEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_CrossEntropyLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Embedding_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_FractionalMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GRUCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GRU_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GRU_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GRU_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GRU_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GaussianNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GroupNorm_cuda_float16, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GroupNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_GroupNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Hardshrink_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Hardswish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Hardswish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Hardtanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_HingeEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_HingeEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_HuberLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_HuberLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_InstanceNorm1d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_InstanceNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_InstanceNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_InstanceNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_InstanceNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_InstanceNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_KLDivLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_KLDivLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_L1Loss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_L1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LPPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LPPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LazyConv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LazyConv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LazyConv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LazyConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LazyConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LeakyReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Linear_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Linear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LogSigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_LogSoftmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MSELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MSELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MarginRankingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Mish_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MultiLabelMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MultiLabelMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MultiLabelSoftMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MultiMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MultiheadAttention_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MultiheadAttention_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_MultiheadAttention_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_NLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_PoissonNLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_RNN_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReflectionPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReflectionPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReflectionPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReplicationPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReplicationPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReplicationPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ReplicationPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_SELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_SELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_SiLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Sigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_SoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Softshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Softsign_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Softsign_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Tanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Tanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Tanhshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_Tanhshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_TransformerDecoderLayer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_TransformerDecoderLayer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_TransformerEncoderLayer_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_TransformerEncoderLayer_train_mode_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_TransformerEncoder_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_TransformerEncoder_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ZeroPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ZeroPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_non_contiguous_tensors_nn_ZeroPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_AdaptiveAvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_AdaptiveMaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_AdaptiveMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_AdaptiveMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_AdaptiveMaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_AvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_BCELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_BatchNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_BatchNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_BatchNorm2d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_BatchNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_BatchNorm3d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Bilinear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_CELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_CELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_CTCLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_CircularPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_CircularPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConstantPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConstantPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConstantPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConstantPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Conv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Conv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Conv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Conv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Conv3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Conv3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose1d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose1d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose2d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose2d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose2d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose2d_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_CosineEmbeddingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_CosineEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_CrossEntropyLoss_cuda_float16, test/test_modules.py::TestModuleCUDA::test_repr_nn_CrossEntropyLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_ELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Embedding_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_FractionalMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_FractionalMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_FractionalMaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_GELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_GELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_GLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_GRUCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_GRU_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_GRU_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_GRU_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_GaussianNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_GroupNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Hardshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Hardtanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Hardtanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_HuberLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm1d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm2d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_InstanceNorm3d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_KLDivLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_KLDivLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_L1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_LPPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LPPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LSTMCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LSTMCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_LSTM_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_LazyConv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LazyConv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LazyConvTranspose1d_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_repr_nn_LazyConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LazyConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LeakyReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LeakyReLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Linear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_LocalResponseNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_LocalResponseNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_MSELoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_MarginRankingLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_MarginRankingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_MaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_MaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_MultiLabelMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_MultiMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_MultiheadAttention_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_MultiheadAttention_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_NLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_NLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_PoissonNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_RNNCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_RNN_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_RNN_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_RNN_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_RNN_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ReflectionPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_ReflectionPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_ReplicationPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ReplicationPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ReplicationPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_SELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_SELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_SiLU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Sigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Sigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Softmax2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Softmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Softmin_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Softplus_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Softsign_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Tanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Tanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_Tanhshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Tanhshrink_cuda_float64, 
test/test_modules.py::TestModuleCUDA::test_repr_nn_Threshold_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Threshold_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_TransformerDecoderLayer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_TransformerDecoderLayer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_TransformerEncoderLayer_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_TransformerEncoder_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_Transformer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ZeroPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_repr_nn_ZeroPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ZeroPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_repr_nn_ZeroPad3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AdaptiveAvgPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AdaptiveAvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AdaptiveAvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AdaptiveMaxPool1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AdaptiveMaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AdaptiveMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AdaptiveMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AvgPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AvgPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_AvgPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BCELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BCEWithLogitsLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BCEWithLogitsLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BatchNorm1d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BatchNorm1d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BatchNorm1d_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BatchNorm1d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BatchNorm2d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BatchNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_BatchNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Bilinear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_CTCLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_CTCLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_CircularPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_CircularPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConstantPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConstantPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConstantPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConstantPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Conv3d_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConvTranspose1d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConvTranspose1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConvTranspose2d_cuda_complex128, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConvTranspose2d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConvTranspose2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConvTranspose3d_cuda_complex32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConvTranspose3d_cuda_complex64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ConvTranspose3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_CrossEntropyLoss_cuda_float16, test/test_modules.py::TestModuleCUDA::test_save_load_nn_CrossEntropyLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_FractionalMaxPool2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_FractionalMaxPool3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_FractionalMaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GELU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GELU_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GRUCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GRU_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GRU_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GaussianNLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GroupNorm_cuda_bfloat16, test/test_modules.py::TestModuleCUDA::test_save_load_nn_GroupNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Hardshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Hardswish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Hardswish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Hardtanh_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_HingeEmbeddingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_HuberLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_HuberLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_InstanceNorm1d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_InstanceNorm2d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_InstanceNorm2d_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_InstanceNorm3d_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_InstanceNorm3d_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_KLDivLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LPPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LSTMCell_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LSTM_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LSTM_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LSTM_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LayerNorm_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_save_load_nn_LazyConv1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LazyConv1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LazyConv2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LazyConv2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LazyConvTranspose2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LazyConvTranspose3d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Linear_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Linear_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LocalResponseNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LocalResponseNorm_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LogSigmoid_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_LogSoftmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MSELoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MarginRankingLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MaxPool1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MaxPool2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MaxPool3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Mish_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Mish_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MultiLabelMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MultiLabelSoftMarginLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MultiMarginLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MultiheadAttention_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_MultiheadAttention_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_NLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_PoissonNLLLoss_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_PoissonNLLLoss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_RMSNorm_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_RNNCell_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_RNN_eval_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_RNN_eval_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_RNN_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ReLU6_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ReLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ReflectionPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ReflectionPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ReflectionPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ReplicationPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ReplicationPad2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ReplicationPad3d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_SiLU_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Sigmoid_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_save_load_nn_Sigmoid_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_SmoothL1Loss_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Softmax2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Softmax2d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Softmax_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Softplus_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Softplus_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Softshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Softsign_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Tanh_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Tanhshrink_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Tanhshrink_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Threshold_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Threshold_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_TransformerDecoderLayer_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_TransformerEncoderLayer_train_mode_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_TransformerEncoder_train_mode_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_Transformer_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ZeroPad1d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ZeroPad1d_cuda_float64, test/test_modules.py::TestModuleCUDA::test_save_load_nn_ZeroPad2d_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_AdaptiveAvgPool1d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_AdaptiveAvgPool2d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_AdaptiveMaxPool1d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_AdaptiveMaxPool2d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_AvgPool1d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_AvgPool2d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_AvgPool2d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_AvgPool3d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_BCELoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_BCELoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_BCEWithLogitsLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_BCEWithLogitsLoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_BatchNorm1d_train_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_BatchNorm2d_train_mode_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_BatchNorm3d_eval_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_BatchNorm3d_train_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Bilinear_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_CTCLoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_CircularPad1d_swap_False_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_empty_nn_CircularPad1d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_CircularPad2d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_CircularPad3d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ConstantPad1d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ConstantPad1d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ConstantPad2d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Conv1d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Conv3d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Conv3d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ConvTranspose1d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ConvTranspose3d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_CosineEmbeddingLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_CrossEntropyLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ELU_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Embedding_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Embedding_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_GRUCell_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_GRU_eval_mode_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_GRU_eval_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_GRU_train_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_GaussianNLLLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_GaussianNLLLoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_GroupNorm_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Hardswish_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Hardtanh_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_HingeEmbeddingLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_HuberLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_HuberLoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_InstanceNorm1d_train_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_InstanceNorm2d_eval_mode_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_InstanceNorm2d_train_mode_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_InstanceNorm2d_train_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_InstanceNorm3d_eval_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_KLDivLoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_L1Loss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_L1Loss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LPPool1d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LPPool2d_swap_False_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LPPool2d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LSTMCell_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LSTM_eval_mode_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LSTM_train_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LayerNorm_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LayerNorm_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Linear_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LocalResponseNorm_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_LogSigmoid_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MarginRankingLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MaxPool1d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MaxPool3d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MaxPool3d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Mish_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MultiLabelSoftMarginLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MultiLabelSoftMarginLoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MultiMarginLoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MultiheadAttention_eval_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_MultiheadAttention_train_mode_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_PReLU_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_PReLU_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_PoissonNLLLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_PoissonNLLLoss_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_RMSNorm_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_RNNCell_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_RNNCell_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_RNN_eval_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_RNN_train_mode_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ReLU6_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ReLU6_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ReLU_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ReLU_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ReflectionPad1d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ReflectionPad2d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ReflectionPad3d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ReplicationPad2d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_SELU_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_SELU_swap_True_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_empty_nn_SiLU_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_SoftMarginLoss_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Softmax2d_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Softmax_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Softmin_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Softmin_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Softplus_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Softshrink_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Tanh_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Tanhshrink_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Threshold_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_TransformerDecoderLayer_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_TransformerDecoderLayer_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_TransformerEncoderLayer_train_mode_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_TransformerEncoderLayer_train_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_TransformerEncoder_eval_mode_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Transformer_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_Transformer_swap_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ZeroPad2d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_empty_nn_ZeroPad3d_swap_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveAvgPool1d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveAvgPool1d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveAvgPool2d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveAvgPool2d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveAvgPool2d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveAvgPool3d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveMaxPool1d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveMaxPool2d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AdaptiveMaxPool3d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AvgPool1d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AvgPool1d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AvgPool2d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_AvgPool3d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BCELoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BCELoss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BCELoss_swap_True_set_grad_False_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_nn_BCEWithLogitsLoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm1d_train_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm2d_eval_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm2d_train_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm2d_train_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm3d_eval_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm3d_eval_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm3d_train_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm3d_train_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm3d_train_mode_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_BatchNorm3d_train_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Bilinear_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Bilinear_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CELU_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CELU_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CELU_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CTCLoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CTCLoss_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CircularPad1d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CircularPad1d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CircularPad1d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CircularPad2d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CircularPad3d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CircularPad3d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CircularPad3d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad1d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad1d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad1d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad2d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad2d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad2d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad3d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad3d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad3d_swap_True_set_grad_False_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_nn_ConstantPad3d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Conv1d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Conv2d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Conv2d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Conv3d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConvTranspose1d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConvTranspose1d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConvTranspose2d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ConvTranspose2d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CosineEmbeddingLoss_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CrossEntropyLoss_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_CrossEntropyLoss_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Embedding_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Embedding_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_FractionalMaxPool2d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_FractionalMaxPool2d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_FractionalMaxPool2d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_FractionalMaxPool3d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GELU_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GELU_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GRUCell_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GRU_eval_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GRU_eval_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GRU_train_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GRU_train_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GaussianNLLLoss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_GroupNorm_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Hardswish_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Hardswish_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Hardtanh_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Hardtanh_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_HingeEmbeddingLoss_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_HuberLoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm1d_eval_mode_swap_False_set_grad_False_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm1d_eval_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm1d_eval_mode_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm1d_eval_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm1d_train_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm2d_eval_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm2d_eval_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm2d_eval_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm2d_train_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm3d_eval_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm3d_train_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_InstanceNorm3d_train_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_KLDivLoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_KLDivLoss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_KLDivLoss_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_L1Loss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_L1Loss_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LPPool1d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LPPool2d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LPPool2d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LPPool3d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LPPool3d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LSTMCell_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LSTMCell_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LSTMCell_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LSTM_eval_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LSTM_eval_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LSTM_train_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LSTM_train_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LayerNorm_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LayerNorm_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LayerNorm_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LeakyReLU_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LeakyReLU_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LeakyReLU_swap_True_set_grad_True_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_nn_Linear_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Linear_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Linear_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LocalResponseNorm_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LocalResponseNorm_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LogSigmoid_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LogSigmoid_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LogSoftmax_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_LogSoftmax_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MSELoss_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MarginRankingLoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MarginRankingLoss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MarginRankingLoss_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool1d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool1d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool1d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool2d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool2d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool2d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool3d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool3d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MaxPool3d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Mish_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MultiLabelMarginLoss_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MultiMarginLoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MultiMarginLoss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MultiheadAttention_eval_mode_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_MultiheadAttention_train_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_NLLLoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_NLLLoss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_NLLLoss_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_PReLU_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_PReLU_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_PoissonNLLLoss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_PoissonNLLLoss_swap_True_set_grad_True_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_nn_RMSNorm_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_RMSNorm_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_RNNCell_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_RNNCell_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_RNN_eval_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_RNN_eval_mode_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_RNN_eval_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_RNN_train_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_RNN_train_mode_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReLU6_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReLU6_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReLU6_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReLU_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReLU_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReflectionPad1d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReflectionPad1d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReflectionPad2d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReflectionPad3d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReflectionPad3d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReplicationPad1d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReplicationPad1d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReplicationPad2d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReplicationPad2d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReplicationPad3d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ReplicationPad3d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SELU_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SiLU_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SiLU_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SiLU_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SiLU_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Sigmoid_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SmoothL1Loss_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SmoothL1Loss_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SoftMarginLoss_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_SoftMarginLoss_swap_True_set_grad_False_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_nn_Softmax2d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softmax2d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softmax_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softmax_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softmin_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softmin_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softplus_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softplus_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softplus_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softshrink_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softshrink_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softshrink_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softsign_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Softsign_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Tanh_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Tanh_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Tanh_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Tanhshrink_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Tanhshrink_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Threshold_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Threshold_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Threshold_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerDecoderLayer_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerEncoderLayer_eval_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerEncoderLayer_eval_mode_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerEncoderLayer_eval_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerEncoderLayer_train_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerEncoder_eval_mode_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerEncoder_train_mode_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerEncoder_train_mode_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_TransformerEncoder_train_mode_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Transformer_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_Transformer_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ZeroPad1d_swap_False_set_grad_True_cuda_float32, 
test/test_modules.py::TestModuleCUDA::test_to_nn_ZeroPad1d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ZeroPad2d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ZeroPad2d_swap_False_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ZeroPad2d_swap_True_set_grad_True_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ZeroPad3d_swap_False_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ZeroPad3d_swap_True_set_grad_False_cuda_float32, test/test_modules.py::TestModuleCUDA::test_to_nn_ZeroPad3d_swap_True_set_grad_True_cuda_float32 2024-08-07T18:49:11.1473073Z 2024-08-07T18:49:14.7520549Z Running test_meta 1/5 ... [2024-08-07 18:49:14.751541] 2024-08-07T18:49:14.7524648Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_meta.py', '-m', 'not serial', '--shard-id=1', '--num-shards=5', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:49:14.752002] 2024-08-07T18:58:27.7831247Z 2024-08-07T18:58:27.7835608Z inductor/test_torchinductor 3/4 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_torchinductor_3.4_0f3db564f79be0bd_.log 2024-08-07T18:58:27.7897633Z Running 174 items in this shard: test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_broadcast1_broadcast1, test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_broadcast1_broadcast2, test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_broadcast1_int, test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_broadcast2_broadcast2, test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_broadcast2_broadcast3, test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_broadcast3_strided, test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_double_broadcast3, test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_double_transposed, test/inductor/test_torchinductor.py::SweepInputsCpuTest::test_cpu_strided_int, test/inductor/test_torchinductor.py::CpuTests::test__unsafe_masked_index_cpu, test/inductor/test_torchinductor.py::CpuTests::test_abs_cpu, test/inductor/test_torchinductor.py::CpuTests::test_adaptive_avg_pool2d1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_adaptive_max_pool2d2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_add_complex5_cpu, test/inductor/test_torchinductor.py::CpuTests::test_add_complex6_cpu, test/inductor/test_torchinductor.py::CpuTests::test_add_const_int_cpu, test/inductor/test_torchinductor.py::CpuTests::test_add_inplace_permuted_cpu, test/inductor/test_torchinductor.py::CpuTests::test_arange1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_arange5_cpu, test/inductor/test_torchinductor.py::CpuTests::test_argmax_argmin1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_argmax_argmin2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_argmax_to_float_cpu, test/inductor/test_torchinductor.py::CpuTests::test_as_strided_scatter_cpu, test/inductor/test_torchinductor.py::CpuTests::test_avg_pool2d5_cpu, test/inductor/test_torchinductor.py::CpuTests::test_avg_pool2d6_cpu, test/inductor/test_torchinductor.py::CpuTests::test_avg_pool2d_backward3_cpu, test/inductor/test_torchinductor.py::CpuTests::test_avg_pool3d_backward2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_avg_pool3d_backward4_cpu, 
test/inductor/test_torchinductor.py::CpuTests::test_batch_norm_2d_cpu, test/inductor/test_torchinductor.py::CpuTests::test_both_scalars_cpu, test/inductor/test_torchinductor.py::CpuTests::test_bucketize_cpu, test/inductor/test_torchinductor.py::CpuTests::test_bucketize_int_cpu, test/inductor/test_torchinductor.py::CpuTests::test_buffer_batch_norm_cpu, test/inductor/test_torchinductor.py::CpuTests::test_buffer_copied_in_graph_with_different_shapes_cpu, test/inductor/test_torchinductor.py::CpuTests::test_builtins_round_float_ndigits_neg_cpu, test/inductor/test_torchinductor.py::CpuTests::test_builtins_round_int_ndigits_zero_cpu, test/inductor/test_torchinductor.py::CpuTests::test_cat_negative_dim_cpu, test/inductor/test_torchinductor.py::CpuTests::test_cat_of_loops_and_extern_kernel_cpu, test/inductor/test_torchinductor.py::CpuTests::test_cat_unbacked_legacy_empty_cpu, test/inductor/test_torchinductor.py::CpuTests::test_cat_upcasting_cpu, test/inductor/test_torchinductor.py::CpuTests::test_cauchy_cpu, test/inductor/test_torchinductor.py::CpuTests::test_config_option_dont_assume_alignment_cpu, test/inductor/test_torchinductor.py::CpuTests::test_consecutive_split_cumsum_cpu, test/inductor/test_torchinductor.py::CpuTests::test_const_int32_to_float_cpu, test/inductor/test_torchinductor.py::CpuTests::test_constant_pad_2d_cpu, test/inductor/test_torchinductor.py::CpuTests::test_conv_with_as_strided_cpu, test/inductor/test_torchinductor.py::CpuTests::test_convolution3_cpu, test/inductor/test_torchinductor.py::CpuTests::test_convolution4_cpu, test/inductor/test_torchinductor.py::CpuTests::test_cumsum_no_mask_cpu, test/inductor/test_torchinductor.py::CpuTests::test_custom_op_3_cpu, test/inductor/test_torchinductor.py::CpuTests::test_div1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_div2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_div4_cpu, test/inductor/test_torchinductor.py::CpuTests::test_div5_cpu, test/inductor/test_torchinductor.py::CpuTests::test_div_prim_cpu, test/inductor/test_torchinductor.py::CpuTests::test_dropout_cpu, test/inductor/test_torchinductor.py::CpuTests::test_dropout_trivial_0_cpu, test/inductor/test_torchinductor.py::CpuTests::test_dropout_trivial_1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_embedding_bag_cpu, test/inductor/test_torchinductor.py::CpuTests::test_empty2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_erfc_cpu, test/inductor/test_torchinductor.py::CpuTests::test_fallback_mutable_op_with_return_cpu, test/inductor/test_torchinductor.py::CpuTests::test_fft_real_input_cpu, test/inductor/test_torchinductor.py::CpuTests::test_fill1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_float_index_expression_cpu, test/inductor/test_torchinductor.py::CpuTests::test_float_index_expression_type_promotion_cpu, test/inductor/test_torchinductor.py::CpuTests::test_forced_buffer_realize_cpu, test/inductor/test_torchinductor.py::CpuTests::test_full_boolean_cpu, test/inductor/test_torchinductor.py::CpuTests::test_fuse_tiled_cpu, test/inductor/test_torchinductor.py::CpuTests::test_fusing_write_into_disjoint_read_cpu, test/inductor/test_torchinductor.py::CpuTests::test_gather3_cpu, test/inductor/test_torchinductor.py::CpuTests::test_gelu_cpu, test/inductor/test_torchinductor.py::CpuTests::test_generate_rand_fp8_cpu, test/inductor/test_torchinductor.py::CpuTests::test_hardswish_cpu, test/inductor/test_torchinductor.py::CpuTests::test_horizonal_fusion2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index3_cpu, 
test/inductor/test_torchinductor.py::CpuTests::test_index_dynamic_shapes_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index_propagation_device_assert_masked_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index_propagation_flip_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index_propagation_floordiv_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index_put1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index_put4_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index_put_failed_reinplace_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index_put_fallback2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_index_select_cpu, test/inductor/test_torchinductor.py::CpuTests::test_inductor_assert_cpu, test/inductor/test_torchinductor.py::CpuTests::test_inplace_activations_cpu, test/inductor/test_torchinductor.py::CpuTests::test_inplace_add_cpu, test/inductor/test_torchinductor.py::CpuTests::test_input_mutation2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_issue102546_cpu, test/inductor/test_torchinductor.py::CpuTests::test_l1_loss_cpu, test/inductor/test_torchinductor.py::CpuTests::test_large_broadcast_reduction_cpu, test/inductor/test_torchinductor.py::CpuTests::test_lgamma_cpu, test/inductor/test_torchinductor.py::CpuTests::test_like_rands2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_linspace3_cpu, test/inductor/test_torchinductor.py::CpuTests::test_masked_scatter_cpu, test/inductor/test_torchinductor.py::CpuTests::test_max_pool2d1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_max_pool2d2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_max_pool2d6_cpu, test/inductor/test_torchinductor.py::CpuTests::test_mean_cpu, test/inductor/test_torchinductor.py::CpuTests::test_mixed_mm2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_multi_gpu_recompile_on_index_cpu, test/inductor/test_torchinductor.py::CpuTests::test_multi_threading_cpu, test/inductor/test_torchinductor.py::CpuTests::test_mutations_loop_fusion_cpu, test/inductor/test_torchinductor.py::CpuTests::test_neg_index_cpu, test/inductor/test_torchinductor.py::CpuTests::test_new_empty_strided_cpu, test/inductor/test_torchinductor.py::CpuTests::test_output_strides_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pad_cast_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pixel_shuffle_channels_last_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_bessel_j1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_bessel_y1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_chebyshev_polynomial_u_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_digamma_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_gammainc_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_gammaln_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_legendre_polynomial_p_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_log_ndtr_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_logit_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_modified_bessel_i1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_modified_bessel_k0_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_round_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_scaled_modified_bessel_k0_cpu, 
test/inductor/test_torchinductor.py::CpuTests::test_pointwise_shifted_chebyshev_polynomial_v_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pointwise_xlog1py_cpu, test/inductor/test_torchinductor.py::CpuTests::test_polar_cpu, test/inductor/test_torchinductor.py::CpuTests::test_pow3_cpu, test/inductor/test_torchinductor.py::CpuTests::test_rand_like_deterministic_cpu, test/inductor/test_torchinductor.py::CpuTests::test_randn_generator_cpu, test/inductor/test_torchinductor.py::CpuTests::test_reduction2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_reduction3_cpu, test/inductor/test_torchinductor.py::CpuTests::test_reduction4_cpu, test/inductor/test_torchinductor.py::CpuTests::test_reflection_pad2d_backward_cpu, test/inductor/test_torchinductor.py::CpuTests::test_reinterpret_dtypeview_cpu, test/inductor/test_torchinductor.py::CpuTests::test_relu_cpu, test/inductor/test_torchinductor.py::CpuTests::test_remove_no_ops_cpu, test/inductor/test_torchinductor.py::CpuTests::test_remove_noop_clone_cpu, test/inductor/test_torchinductor.py::CpuTests::test_resize_cpu, test/inductor/test_torchinductor.py::CpuTests::test_roll_cpu, test/inductor/test_torchinductor.py::CpuTests::test_round_cpu, test/inductor/test_torchinductor.py::CpuTests::test_rsqrt_cpu, test/inductor/test_torchinductor.py::CpuTests::test_rsqrt_dynamic_shapes_cpu, test/inductor/test_torchinductor.py::CpuTests::test_scalar_output_cpu, test/inductor/test_torchinductor.py::CpuTests::test_scatter_bf16_cpu, test/inductor/test_torchinductor.py::CpuTests::test_scatter_reduce1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_scheduler_vertical_fusion1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_sdpa_unaligned_mask_cpu, test/inductor/test_torchinductor.py::CpuTests::test_sdpa_use_block_ptr_False_cpu, test/inductor/test_torchinductor.py::CpuTests::test_setitem_with_int_parameter_cpu, test/inductor/test_torchinductor.py::CpuTests::test_shape_padding_cpu, test/inductor/test_torchinductor.py::CpuTests::test_shape_prop_torch_ones_cpu, test/inductor/test_torchinductor.py::CpuTests::test_slice2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_slice_mutation1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_slice_scatter2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_sort_stable_cpu, test/inductor/test_torchinductor.py::CpuTests::test_split_cumsum_cpu, test/inductor/test_torchinductor.py::CpuTests::test_split_cumsum_low_prec_cpu, test/inductor/test_torchinductor.py::CpuTests::test_split_with_sizes_with_unbacked_symints_cpu, test/inductor/test_torchinductor.py::CpuTests::test_sqrt_dynamic_shapes_cpu, test/inductor/test_torchinductor.py::CpuTests::test_squeeze1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_squeeze2_cpu, test/inductor/test_torchinductor.py::CpuTests::test_squeeze_varargs_cpu, test/inductor/test_torchinductor.py::CpuTests::test_stack_cpu, test/inductor/test_torchinductor.py::CpuTests::test_tensor1_cpu, test/inductor/test_torchinductor.py::CpuTests::test_to_device_constant_cpu, test/inductor/test_torchinductor.py::CpuTests::test_to_dtype_cpu, test/inductor/test_torchinductor.py::CpuTests::test_transpose_add_cpu, test/inductor/test_torchinductor.py::CpuTests::test_unspec_inputs_cpu, test/inductor/test_torchinductor.py::CpuTests::test_upsample_nearest2d_cpu, test/inductor/test_torchinductor.py::CpuTests::test_upsample_nearest3d_cpu, test/inductor/test_torchinductor.py::CpuTests::test_var_correction_cpu, 
test/inductor/test_torchinductor.py::CpuTests::test_vectorized_ops_masked_var_novec_cpu, test/inductor/test_torchinductor.py::CpuTests::test_view_as_complex_cpu, test/inductor/test_torchinductor.py::CpuTests::test_views1_cpu, test/inductor/test_torchinductor.py::TestFull::test_full_dtype 2024-08-07T18:58:27.7957438Z 2024-08-07T18:58:31.6130350Z Running test_meta 5/5 ... [2024-08-07 18:58:31.612541] 2024-08-07T18:58:31.6135484Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_meta.py', '-m', 'not serial', '--shard-id=5', '--num-shards=5', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 18:58:31.613059] 2024-08-07T18:59:06.2418254Z 2024-08-07T18:59:06.2421461Z test_meta 1/5 was successful, full logs can be found in artifacts with path test/test-reports/test_meta_1.5_1dc589540d194270_.log 2024-08-07T18:59:06.5582514Z Running 7827 items in this shard: test/test_meta.py::TestMetaConverter::test_channels_last_leaf, test/test_meta.py::TestMetaConverter::test_view_of_view_of_leaf, test/test_meta.py::TestMetaCUDA::test_batch_norm_backward_output_mask1_cuda, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype___rpow___cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs__conversions_complex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs__conversions_polar_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_clamp_min_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_copysign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_floor_divide_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_fmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_fmod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_gt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_igammac_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_le_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_true_divide_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_eq_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_heaviside_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_jiterator_binary_return_by_ref_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_le_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_lt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_H_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_H_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_T_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_T_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___getitem___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___getitem___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___radd___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___radd___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rand___cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rand___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rdiv___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rdiv___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rmatmul___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rmod___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rmod___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rmul___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rmul___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rmul___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rpow___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rpow___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rpow___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rsub___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rsub___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__batch_norm_with_update_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__chunk_cat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_abs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_abs_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_abs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_acos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_acos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_acos_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_acos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcmul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcmul_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_asin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_asin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_asin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_atan_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_atan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_ceil_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_ceil_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_clamp_max_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_clamp_min_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cos_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cos_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cos_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cosh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_div_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erfc_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erfc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_exp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_exp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_exp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_exp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_expm1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_floor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_frac_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_frac_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_lerp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_lerp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_lgamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log10_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log10_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log10_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log10_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log1p_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log1p_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_max_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_max_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_maximum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_maximum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_maximum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_minimum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_minimum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_mul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_mul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_mul_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_neg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_neg_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_neg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_norm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_pow_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_pow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_pow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_reciprocal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_reciprocal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_round_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sigmoid_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sigmoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sigmoid_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sign_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sinh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sub_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sub_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sub_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sub_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tanh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_trunc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_zero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_zero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_zero_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__segment_reduce_offsets_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__segment_reduce_offsets_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__softmax_backward_data_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_put_accumulate_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_put_accumulate_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_put_accumulate_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__upsample_bilinear2d_aa_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_abs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_abs_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_abs_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_acos_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_acos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_add_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addbmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addbmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addbmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addcdiv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addcdiv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addcmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addcmul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addmv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addr_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_alias_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_allclose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_aminmax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_aminmax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_angle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_angle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_angle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_any_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_arange_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_arange_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_arange_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argmin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argwhere_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_partial_views_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_partial_views_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_partial_views_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_partial_views_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_partial_views_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_scatter_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asinh_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asinh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asinh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atan2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atan2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atan2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atan2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atanh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atanh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_1d_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_1d_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_1d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_2d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_3d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bfloat16_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bfloat16_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bfloat16_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bfloat16_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_left_shift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_left_shift_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_not_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_not_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_xor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_xor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_block_diag_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_block_diag_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bool_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bool_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bool_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_tensors_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_tensors_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_tensors_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_to_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_to_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bucketize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bucketize_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_byte_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_byte_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cartesian_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cartesian_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cartesian_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cat_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cauchy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cauchy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cdouble_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cdouble_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cdouble_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cdouble_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ceil_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ceil_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ceil_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cfloat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chalf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chalf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chalf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_char_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_char_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_char_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_char_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cholesky_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cholesky_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cholesky_inverse_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chunk_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chunk_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chunk_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_max_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_min_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clone_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clone_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clone_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_column_stack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_combinations_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_combinations_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_physical_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_constant_pad_nd_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_constant_pad_nd_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_contiguous_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_corrcoef_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_corrcoef_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_corrcoef_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_corrcoef_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cosh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_count_nonzero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_count_nonzero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_count_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_count_nonzero_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cov_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cross_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cross_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cross_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cummax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cummin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cummin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cummin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumprod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumsum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumulative_trapezoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumulative_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumulative_trapezoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumulative_trapezoid_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumulative_trapezoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumulative_trapezoid_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_deg2rad_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_embed_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_embed_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_embed_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_embed_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_embed_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagflat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagflat_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagflat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagflat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagflat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diff_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diff_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_digamma_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_digamma_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_digamma_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dist_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_floor_rounding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_no_rounding_mode_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_no_rounding_mode_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_trunc_rounding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_trunc_rounding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_double_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_double_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_double_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dsplit_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dstack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dstack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_einsum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_permuted_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_permuted_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_permuted_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_permuted_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_strided_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_eq_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_eq_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_equal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_equal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfc_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfinv_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfinv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_as_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expm1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expm1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expm1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exponential_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_eye_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_eye_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftshift_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftshift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftshift_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifftn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifftshift_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_rfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_rfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_rfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flatten_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flip_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flip_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fliplr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fliplr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fliplr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flipud_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_power_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_power_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_floor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_floor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_floor_divide_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_floor_divide_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_frac_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_frexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gather_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gather_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gather_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ge_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ge_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_geometric_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_geometric_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_geqrf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_geqrf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gradient_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gradient_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gradient_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_grid_sampler_2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_grid_sampler_2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_half_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_half_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_half_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_half_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_heaviside_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_heaviside_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_heaviside_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_heaviside_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_histc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hypot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_i0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_i0_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_fill_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_amin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_select_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_select_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_select_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_inner_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_int_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_int_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_int_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isclose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isclose_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isclose_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isclose_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isfinite_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isfinite_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isfinite_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isinf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isinf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isneginf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isneginf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isneginf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isposinf_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isposinf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isreal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isreal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_item_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_item_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_item_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_2inputs_2outputs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_2inputs_2outputs_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_2inputs_2outputs_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_4inputs_with_extra_args_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_4inputs_with_extra_args_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_return_by_ref_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_return_by_ref_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_return_by_ref_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_unary_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_unary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_unary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_unary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_unary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kron_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kron_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kron_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kron_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kthvalue_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kthvalue_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kthvalue_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kthvalue_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lcm_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ldexp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_le_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_le_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_le_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cholesky_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cholesky_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cholesky_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cond_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cross_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cross_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_diagonal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_diagonal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eigvals_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eigvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_householder_product_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_inv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_inv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_inv_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_inv_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_ldl_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_lu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_matrix_power_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_multi_dot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_multi_dot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_norm_subgradients_at_zero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_norm_subgradients_at_zero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_pinv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_pinv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_pinv_hermitian_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_pinv_hermitian_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_pinv_hermitian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_qr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_slogdet_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_solve_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_solve_triangular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_solve_triangular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_svd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_svd_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_tensorinv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_tensorsolve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vander_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vander_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vander_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vector_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linspace_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linspace_tensor_overload_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log10_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log1p_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log1p_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log1p_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_normal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_with_dtype_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_with_dtype_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_with_dtype_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logdet_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_and_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_not_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_not_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_not_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_not_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_or_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_or_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_or_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_xor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logspace_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logspace_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logspace_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logspace_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logsumexp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_long_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_long_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_long_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mH_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mH_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mH_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mT_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mT_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mT_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_amax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_amin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_amin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_argmax_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_argmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_argmin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumsum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumsum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumsum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_fill_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_fill_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_log_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_log_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_logsumexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_median_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_median_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_normalize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_prod_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_select_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_softmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_std_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_std_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_std_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_sum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_sum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_var_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_var_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_matmul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_binary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_pool2d_with_indices_backward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_reduction_no_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_reduction_no_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_reduction_with_dim_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_reduction_with_dim_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_maximum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_maximum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_maximum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_maximum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_maximum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_median_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_median_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_median_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_list_of_tensors_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_list_of_tensors_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_list_of_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_list_of_tensors_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_list_of_tensors_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_variadic_tensors_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_variadic_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_variadic_tensors_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_variadic_tensors_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_reduction_no_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_reduction_with_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_reduction_with_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_minimum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_movedim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_movedim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_movedim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_msort_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mul_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_multinomial_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_multinomial_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nan_to_num_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nan_to_num_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nanmean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nanmedian_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nanmedian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nanmedian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nansum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nansum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_native_dropout_backward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_native_dropout_backward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ne_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_strided_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_strided_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_full_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_full_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_full_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_full_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_full_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_ones_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_zeros_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_zeros_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nextafter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_adaptive_avg_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_adaptive_max_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_alpha_dropout_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_batch_norm_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_binary_cross_entropy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_celu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_channel_shuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_channel_shuffle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_channel_shuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv1d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv2d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv_transpose1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv_transpose1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv_transpose3d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv_transpose3d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_cosine_embedding_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_cosine_similarity_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_cross_entropy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_ctc_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_elu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_elu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_embedding_bag_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_embedding_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_feature_alpha_dropout_with_train_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_fractional_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_gaussian_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_gelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_group_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardshrink_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardsigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardtanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardtanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hinge_embedding_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_huber_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_area_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_bicubic_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_bicubic_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_bilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_linear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_nearest-exact_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_trilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_kl_div_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_leaky_relu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_leaky_relu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_linear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_linear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_local_response_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_local_response_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_local_response_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_logsigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_margin_ranking_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_margin_ranking_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_unpool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_unpool2d_grad_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_unpool2d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_unpool3d_grad_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_unpool3d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_mish_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multi_head_attention_forward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multi_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multi_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multilabel_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multilabel_soft_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multilabel_soft_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_nll_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_normalize_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_circular_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_circular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_circular_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_circular_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_constant_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_reflect_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_reflect_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_replicate_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_replicate_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_replicate_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_replicate_negative_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_replicate_negative_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pairwise_distance_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_shuffle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_unshuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_unshuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_poisson_nll_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_prelu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_relu6_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_relu_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_rms_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_silu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_smooth_l1_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_soft_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softmin_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softmin_with_dtype_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softplus_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softplus_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softsign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softsign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_tanhshrink_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_threshold_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_threshold_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_threshold_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_loss_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_unfold_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_unfold_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_upsample_bilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nonzero_static_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nonzero_static_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_fro_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_inf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_inf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_nuc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_normal_in_place_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_normal_in_place_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_normal_number_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_like_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_outer_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_outer_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_outer_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_0_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_1_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_4_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_4_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_positive_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_positive_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_positive_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_prod_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_put_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_put_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rad2deg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rad2deg_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rad2deg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randn_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ravel_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_real_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_real_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reciprocal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reciprocal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reciprocal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_remainder_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_interleave_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_interleave_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_interleave_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_interleave_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_interleave_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_as_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_as_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resize__cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resize__cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_conj_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_conj_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_conj_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_neg_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_neg_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_neg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_roll_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_roll_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_roll_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_roll_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rot90_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rot90_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_decimals_neg_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsqrt_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsqrt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsqrt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsub_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsub_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scalar_tensor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scalar_tensor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scalar_tensor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_amax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_amin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_mean_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_mean_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_prod_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_prod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_sum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_sum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_searchsorted_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sgn_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sgn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sgn_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sgn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sgn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_short_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_short_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sigmoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sigmoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sign_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_bartlett_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_exponential_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_exponential_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_general_hamming_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_hamming_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_kaiser_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_nuttall_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_nuttall_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signbit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signbit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_softmax_with_dtype_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_softmax_with_dtype_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_softmax_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_softmax_with_dtype_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_softmax_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sparse_mm_reduce_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sparse_sampled_addmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_airy_ai_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_airy_ai_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_airy_ai_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_airy_ai_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_y0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_y1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_y1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_u_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_u_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_w_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_w_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_w_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_w_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_w_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_entr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_entr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_entr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_entr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_erfcx_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_erfcx_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_hermite_polynomial_h_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_hermite_polynomial_h_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_hermite_polynomial_he_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_hermite_polynomial_he_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i0e_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i0e_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i1e_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i1e_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_laguerre_polynomial_l_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_laguerre_polynomial_l_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_legendre_polynomial_p_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_legendre_polynomial_p_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_log_ndtr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_log_ndtr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_log_ndtr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_i0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_i1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_i1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_k0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_k0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_k0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_k1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_k1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_scaled_modified_bessel_k0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_scaled_modified_bessel_k0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_scaled_modified_bessel_k1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_spherical_bessel_j0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_spherical_bessel_j0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_xlog1py_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_zeta_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_list_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_list_args_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_list_args_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_square_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_squeeze_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_squeeze_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_squeeze_multiple_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_squeeze_multiple_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_stack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_stack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_stack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_std_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sum_to_size_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_along_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_along_dim_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tan_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tanh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tensor_split_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tensor_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tensordot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tensordot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tile_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tile_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tile_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_sparse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_topk_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trace_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_transpose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_transpose_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_transpose_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_transpose_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trapezoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trapz_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trapz_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_indices_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_indices_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_true_divide_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_true_divide_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_true_divide_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_true_divide_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trunc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unbind_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unbind_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unbind_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unbind_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unflatten_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unflatten_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unflatten_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_copy_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unique_consecutive_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unique_consecutive_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unique_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unique_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unravel_index_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_chunk_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_chunk_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_chunk_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_split_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsqueeze_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsqueeze_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_var_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_var_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_var_mean_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_var_mean_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_var_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_as_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_as_real_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_as_real_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vsplit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vstack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vstack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_xlogy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_xlogy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zero__cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zero__cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zero__cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___getitem___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___getitem___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___radd___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rand___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rand___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rdiv___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmatmul___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmatmul___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmod___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmul___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___ror___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___ror___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rpow___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rpow___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rpow___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rpow___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rpow___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rpow___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rsub___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rxor___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__batch_norm_with_update_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__chunk_cat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__chunk_cat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__chunk_cat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_abs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_abs_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_acos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_acos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_acos_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_addcmul_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_asin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_asin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_asin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_atan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_atan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_atan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_ceil_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_ceil_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_ceil_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_ceil_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_clamp_max_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_clamp_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_clamp_max_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_cos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_cosh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_cosh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_div_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_div_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erfc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_exp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_exp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_expm1_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_frac_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lerp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lerp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lerp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lerp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lerp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lerp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lerp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lgamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lgamma_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log10_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log10_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log1p_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log1p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_max_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_max_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_max_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_maximum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_mul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_mul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_neg_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_pow_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_pow_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_reciprocal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_reciprocal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sigmoid_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sigmoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sign_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sinh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sinh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sinh_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sinh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sqrt_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sqrt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sqrt_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_trunc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__segment_reduce_lengths_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__softmax_backward_data_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_put_accumulate_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_put_accumulate_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_put_accumulate_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_put_accumulate_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__upsample_bilinear2d_aa_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__upsample_bilinear2d_aa_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_abs_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_abs_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acos_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_add_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addbmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addbmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addcdiv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addcdiv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addcmul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addr_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_alias_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_all_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_all_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_all_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_allclose_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_aminmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_aminmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_angle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_angle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_angle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_angle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_any_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_arange_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argmin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argsort_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argsort_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_partial_views_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_partial_views_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_partial_views_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asinh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atanh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_1d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_3d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bfloat16_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bfloat16_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_and_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_and_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_left_shift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_not_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_or_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_or_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_right_shift_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_xor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_block_diag_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_block_diag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_block_diag_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bool_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bool_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_broadcast_to_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_broadcast_to_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bucketize_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bucketize_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_byte_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_byte_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_byte_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cartesian_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cdist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cdouble_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ceil_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ceil_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ceil_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cfloat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cfloat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cfloat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cfloat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_chalf_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_chalf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_char_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_char_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_char_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cholesky_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cholesky_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cholesky_inverse_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cholesky_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_chunk_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_max_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_min_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_min_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_min_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clone_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clone_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clone_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_column_stack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_column_stack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_column_stack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_column_stack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_combinations_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_physical_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_physical_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_physical_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_physical_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_physical_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_constant_pad_nd_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_constant_pad_nd_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_constant_pad_nd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_contiguous_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_contiguous_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_contiguous_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_copysign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_copysign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_copysign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_copysign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_corrcoef_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_corrcoef_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_count_nonzero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_count_nonzero_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cov_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cross_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cross_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cross_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cummax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cummax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cummax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cummax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cummin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumsum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumsum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumulative_trapezoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumulative_trapezoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_deg2rad_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_deg2rad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_deg2rad_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_deg2rad_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_embed_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_embed_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagflat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagflat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagflat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diff_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_digamma_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_floor_rounding_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_floor_rounding_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_floor_rounding_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_trunc_rounding_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_double_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_double_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dstack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dstack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dstack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_permuted_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_permuted_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_strided_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eq_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_equal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_equal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_equal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfinv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfinv_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expm1_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expm1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expm1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eye_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eye_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftshift_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfftn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftshift_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftshift_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftshift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ihfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ihfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ihfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ihfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ihfftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfft2_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fill_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flatten_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flatten_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fliplr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flipud_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flipud_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flipud_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flipud_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_power_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_power_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_divide_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_divide_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_frexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_full_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_full_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gather_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gcd_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ge_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_geometric_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_geometric_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_geqrf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_geqrf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gradient_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gradient_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gradient_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_grid_sampler_2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_grid_sampler_2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_half_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_half_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_half_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_half_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_heaviside_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_heaviside_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_histc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hsplit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hstack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hstack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_i0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_i0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_i0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_igammac_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_amax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_amax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_amin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_mean_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_mean_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_inner_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_inner_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_int_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_int_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_int_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isclose_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isclose_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isfinite_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isfinite_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isfinite_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isinf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isinf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isinf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isneginf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isposinf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isposinf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isreal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isreal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_item_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_item_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_2inputs_2outputs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_2inputs_2outputs_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_4inputs_with_extra_args_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_4inputs_with_extra_args_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_4inputs_with_extra_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_return_by_ref_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_return_by_ref_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_return_by_ref_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_return_by_ref_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_return_by_ref_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_unary_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_unary_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_kron_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_kron_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_kthvalue_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_kthvalue_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_le_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_le_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_le_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_le_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lgamma_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lgamma_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lgamma_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_cholesky_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_cholesky_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_cross_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_cross_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_cross_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_det_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_det_singular_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_diagonal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_diagonal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_diagonal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_diagonal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_diagonal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_diagonal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eig_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eigh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eigh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eigvals_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eigvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eigvalsh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_householder_product_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_inv_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_ldl_factor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_lstsq_grad_oriented_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_lu_factor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_lu_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_lu_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_matrix_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_matrix_power_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_matrix_rank_hermitian_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_matrix_rank_hermitian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_multi_dot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_multi_dot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_norm_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_norm_subgradients_at_zero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_pinv_hermitian_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_pinv_hermitian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_pinv_singular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_qr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_qr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_svdvals_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_tensorinv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_tensorsolve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_tensorsolve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vander_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vander_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vander_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vecdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vector_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linspace_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linspace_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linspace_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linspace_tensor_overload_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linspace_tensor_overload_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linspace_tensor_overload_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log10_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log1p_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log1p_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_normal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_softmax_with_dtype_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_softmax_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_softmax_with_dtype_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logcumsumexp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logcumsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logdet_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logdet_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_not_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_not_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_not_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_or_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_or_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_or_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_xor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_tensor_overload_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_tensor_overload_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_tensor_overload_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_tensor_overload_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_tensor_overload_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_long_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_long_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_long_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_long_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_long_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lu_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lu_unpack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lu_unpack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mH_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mH_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mH_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mH_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mT_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mT_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mT_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mT_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_amin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_argmin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_argmin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_cumprod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_cumprod_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_cumprod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_cumprod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_cumprod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_log_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_logsumexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_logsumexp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_mean_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_mean_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_normalize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_normalize_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_normalize_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_prod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_softmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_var_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_var_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_var_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_matmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_matrix_exp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_pool2d_with_indices_backward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_no_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_no_dim_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_no_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_with_dim_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_maximum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_maximum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_maximum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_median_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_meshgrid_list_of_tensors_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_meshgrid_list_of_tensors_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_meshgrid_list_of_tensors_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_meshgrid_list_of_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_meshgrid_variadic_tensors_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_reduction_no_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_reduction_with_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_minimum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_minimum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_minimum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mode_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_movedim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_msort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nan_to_num_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nanmean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nanmedian_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nansum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_narrow_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_native_dropout_backward_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_native_dropout_backward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_native_dropout_backward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ne_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_neg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_neg_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_neg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_empty_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_empty_strided_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_empty_strided_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_empty_strided_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_empty_strided_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_empty_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_ones_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_zeros_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_zeros_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nextafter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_avg_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_avg_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_max_pool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_max_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_alpha_dropout_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_avg_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_avg_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_avg_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_avg_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_avg_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_batch_norm_without_cudnn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_binary_cross_entropy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_binary_cross_entropy_with_logits_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_celu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_celu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_channel_shuffle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_channel_shuffle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_channel_shuffle_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_channel_shuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv1d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv2d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose1d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose2d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose3d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose3d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_cosine_embedding_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_cosine_similarity_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_ctc_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_dropout3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_dropout_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_embedding_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_embedding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_feature_alpha_dropout_with_train_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_feature_alpha_dropout_with_train_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_fractional_max_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_fractional_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_gaussian_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_gaussian_nll_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_gelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_glu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_grid_sample_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_group_norm_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hardshrink_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hardsigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hardswish_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hardtanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hinge_embedding_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_huber_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_huber_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_bicubic_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_bicubic_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_bilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_bilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_bilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_nearest-exact_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_trilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_trilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_l1_loss_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_leaky_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_linear_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_local_response_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_logsigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_margin_ranking_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_max_pool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_max_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_max_unpool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_mish_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_multi_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_multilabel_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_multilabel_soft_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_normalize_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_circular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_circular_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_circular_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_circular_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_constant_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_constant_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_constant_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_constant_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_reflect_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_reflect_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_reflect_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_negative_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pairwise_distance_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pairwise_distance_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pairwise_distance_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pdist_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pixel_shuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pixel_unshuffle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_poisson_nll_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_poisson_nll_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_prelu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_relu6_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_relu6_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_relu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_relu_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_relu_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_rms_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_rms_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_rms_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_rrelu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_selu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_soft_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_softshrink_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_softsign_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_softsign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_tanhshrink_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_tanhshrink_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_threshold_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_threshold_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_unfold_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_upsample_nearest_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_upsample_nearest_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nonzero_static_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nonzero_static_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nonzero_static_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nonzero_static_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_fro_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_inf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_inf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_nuc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_normal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_normal_in_place_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_normal_in_place_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_like_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ormqr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_outer_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_outer_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_outer_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_pca_lowrank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_pinverse_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_pinverse_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_pinverse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polar_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_3_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_4_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_positive_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_positive_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_pow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_prod_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rad2deg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rad2deg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rad2deg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rand_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rand_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randint_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randint_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randint_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randint_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randint_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_real_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_real_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_real_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reciprocal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reciprocal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_remainder_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_renorm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_interleave_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_as_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resize_as__cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resize_as__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resolve_neg_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resolve_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resolve_neg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_roll_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_roll_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_roll_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rot90_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rot90_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rot90_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rot90_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_round_decimals_0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsqrt_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsqrt_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsqrt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsqrt_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsqrt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsub_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsub_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scalar_tensor_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scalar_tensor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scalar_tensor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_amax_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_searchsorted_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_searchsorted_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sgn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_short_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sigmoid_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sigmoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sigmoid_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_signal_windows_blackman_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_signal_windows_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_signal_windows_hann_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_softmax_with_dtype_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_softmax_with_dtype_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_softmax_with_dtype_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_softmax_with_dtype_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sort_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sort_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sparse_mm_reduce_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sparse_sampled_addmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sparse_sampled_addmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_u_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_w_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_entr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_erfcx_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_hermite_polynomial_h_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_hermite_polynomial_h_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i0e_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i0e_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i0e_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i1e_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i1e_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_laguerre_polynomial_l_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_laguerre_polynomial_l_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_laguerre_polynomial_l_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_legendre_polynomial_p_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_log_ndtr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_k0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_k0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_k0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_k1_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_ndtr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_ndtr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_ndtr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_scaled_modified_bessel_k0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_scaled_modified_bessel_k1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_scaled_modified_bessel_k1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_scaled_modified_bessel_k1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_v_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_v_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_w_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_spherical_bessel_j0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_xlog1py_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_zeta_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_list_args_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_list_args_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_list_args_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_list_args_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_squeeze_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_squeeze_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_squeeze_multiple_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_stack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_stack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_stack_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_stack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_mean_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sum_to_size_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sum_to_size_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sum_to_size_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_svd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_svd_lowrank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_svd_lowrank_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tanh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tensor_split_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tensor_split_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tensor_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tensor_split_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tile_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tile_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tile_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tile_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_to_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_to_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_to_sparse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_to_sparse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_to_sparse_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_torch__scaled_mm_cuda_float8_e4m3fn, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_torch_ops_aten__efficient_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_torch_ops_aten__flash_attention_forward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trace_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trace_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trace_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trace_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_transpose_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_transpose_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trapezoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trapezoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trapezoid_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trapz_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_triangular_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tril_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tril_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tril_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tril_indices_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_triu_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_triu_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_triu_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_true_divide_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_true_divide_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_true_divide_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_true_divide_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_true_divide_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_true_divide_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trunc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trunc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trunc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trunc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unbind_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unbind_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unbind_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unflatten_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unflatten_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unflatten_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unfold_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unfold_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_uniform_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_consecutive_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_consecutive_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsafe_chunk_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsafe_chunk_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsafe_chunk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsafe_split_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_copy_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_mean_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vdot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_as_complex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_as_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vsplit_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vstack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vstack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_where_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_where_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_where_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_xlogy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_xlogy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zero__cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zero__cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zero__cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zero__cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_H_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_H_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_T_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_T_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_T_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___getitem___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___getitem___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___getitem___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___radd___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___radd___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___radd___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rand___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rdiv___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rdiv___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rdiv___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rdiv___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rdiv___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmod___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmod___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmod___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmul___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmul___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rpow___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rpow___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rpow___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rpow___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rsub___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rsub___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_abs_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_acos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_acos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_acos_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_acos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_acos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_acos_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcdiv_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcdiv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcdiv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcdiv_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcmul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_atan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_atan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_atan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_ceil_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_ceil_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_ceil_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_ceil_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_max_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_max_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_min_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_min_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_min_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_min_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cos_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cos_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cosh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cosh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cosh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cosh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_div_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_div_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_erf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_erfc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_erfc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_expm1_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_expm1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_floor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_floor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_floor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_frac_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_lerp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_lerp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_lerp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_lerp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log10_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log1p_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log1p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log1p_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log1p_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log1p_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_max_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_max_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_max_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_maximum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_maximum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_maximum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_maximum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_minimum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_minimum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_minimum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_mul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_mul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_neg_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_neg_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_norm_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_norm_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_pow_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_pow_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_pow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_reciprocal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_reciprocal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_reciprocal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_reciprocal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_round_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_round_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_round_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sigmoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sigmoid_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sign_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sinh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sqrt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sqrt_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sqrt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sqrt_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_tan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_tan_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_tanh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_zero_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_zero_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__native_batch_norm_legit_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__native_batch_norm_legit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__segment_reduce_lengths_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_abs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_abs_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_abs_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_acosh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addcdiv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addcmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addcmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addmm_decomposed_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addmv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addmv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_alias_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_alias_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_alias_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides___rand___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__chunk_cat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_abs_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_acos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_ceil_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_copy_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_cos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_div_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_log2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_log_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_pow_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_sinh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__segment_reduce_offsets_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__unsafe_masked_index_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_abs_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_all_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_allclose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_arange_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_as_strided_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_as_strided_partial_views_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_asin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_atleast_1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_char_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_column_stack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_combinations_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_conj_physical_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_corrcoef_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_cosh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_count_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_cumulative_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_diagonal_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_diagonal_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_einsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_erf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_expand_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fft_hfft2_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fft_rfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_frac_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_geqrf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_gradient_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_heaviside_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_index_put_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_isfinite_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_isinf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_istft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_kron_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_ldexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_le_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_lu_factor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_multi_dot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_pinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_pinv_hermitian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_tensorsolve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_vecdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_log_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_log_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_masked_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_masked_cumprod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_masked_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_max_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_multinomial_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_mv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nanquantile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_native_layer_norm_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_new_zeros_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_adaptive_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_avg_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_cross_entropy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_elu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_huber_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_leaky_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_multi_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_nll_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_one_hot_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_pad_circular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_pad_replicate_negative_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_rrelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_soft_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_softmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_upsample_bilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_pca_lowrank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_polygamma_polygamma_n_4_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_ravel_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_reciprocal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_reshape_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_resolve_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_roll_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_rsub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_scatter_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_signal_windows_blackman_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_signal_windows_general_cosine_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_signal_windows_general_hamming_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_signal_windows_hann_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_signal_windows_kaiser_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_sin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_sort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_hermite_polynomial_he_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_scaled_modified_bessel_k1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_shifted_chebyshev_polynomial_v_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_xlog1py_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_square_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_sum_to_size_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_t_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_tan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_transpose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_trapz_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_uniform_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_unsafe_chunk_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_view_as_complex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_view_as_real_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_vsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_vstack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_zeros_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_allclose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_aminmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_aminmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_aminmax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_angle_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_any_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_any_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_arange_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_arange_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argsort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argsort_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argsort_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argwhere_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argwhere_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_partial_views_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_partial_views_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_partial_views_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asinh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asinh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atan2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_1d_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_2d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_3d_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bfloat16_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bfloat16_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bfloat16_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bfloat16_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_not_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_or_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_or_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_xor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_block_diag_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_block_diag_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_block_diag_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_block_diag_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_block_diag_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_block_diag_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bool_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bool_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bool_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_tensors_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_tensors_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_tensors_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_tensors_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bucketize_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_byte_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cartesian_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cartesian_prod_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cartesian_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cartesian_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cartesian_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cauchy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cdouble_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cdouble_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ceil_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ceil_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ceil_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cfloat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cfloat_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cfloat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cfloat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chalf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_char_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_char_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cholesky_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cholesky_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cholesky_inverse_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cholesky_solve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chunk_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chunk_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chunk_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_max_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_min_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_min_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clone_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clone_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clone_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_column_stack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_column_stack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_column_stack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_column_stack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_column_stack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_combinations_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_combinations_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_complex_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_complex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_physical_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_physical_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_physical_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_constant_pad_nd_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_constant_pad_nd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_constant_pad_nd_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_contiguous_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_contiguous_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_contiguous_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_copysign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_copysign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_corrcoef_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_corrcoef_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_count_nonzero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_count_nonzero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_count_nonzero_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cov_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cross_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cross_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cummax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cummax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumprod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumprod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumsum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumsum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumsum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumulative_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_deg2rad_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_deg2rad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_deg2rad_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_deg2rad_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_embed_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_embed_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_embed_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagflat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_scatter_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diff_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diff_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_digamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_no_rounding_mode_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_trunc_rounding_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_trunc_rounding_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_double_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_double_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_double_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dstack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_permuted_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_permuted_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_permuted_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_permuted_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_strided_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_strided_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_equal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_equal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erfc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erfc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erfinv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erfinv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erfinv_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_as_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_as_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_as_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expm1_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_eye_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_eye_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_eye_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftshift_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftshift_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftshift_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftshift_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftshift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft2_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fill_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_flatten_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_flatten_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_flip_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_flip_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fliplr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fliplr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fliplr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_power_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_power_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_divide_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_divide_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_divide_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_divide_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmin_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gather_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_geometric_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gradient_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gradient_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_half_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_heaviside_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_hsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_hstack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_hstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_hstack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_hstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_hypot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_i0_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_i0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_igammac_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_imag_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_fill_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_fill_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_put_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_amax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_mean_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_inner_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_inner_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_int_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_int_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_int_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isfinite_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isfinite_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isinf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isnan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isneginf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isneginf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isposinf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isposinf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isposinf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isreal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isreal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isreal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isreal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_item_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_item_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_item_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_item_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_item_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_item_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_2inputs_2outputs_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_4inputs_with_extra_args_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_binary_return_by_ref_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_binary_return_by_ref_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_unary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_unary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_kron_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_kron_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_kthvalue_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lcm_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ldexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ldexp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_le_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_le_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lgamma_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lgamma_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lgamma_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cholesky_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cross_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cross_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cross_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_det_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_det_singular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_diagonal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_diagonal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_eig_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_eigh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_eigh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_eigvals_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_eigvals_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_inv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_inv_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_ldl_factor_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_ldl_factor_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lstsq_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lstsq_grad_oriented_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lstsq_grad_oriented_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_factor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_factor_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_power_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_rank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_rank_hermitian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_multi_dot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_norm_subgradients_at_zero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_pinv_hermitian_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_pinv_singular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_qr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_qr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_solve_triangular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_svd_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_tensorsolve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_vander_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_vecdot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_tensor_overload_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_tensor_overload_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_tensor_overload_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log10_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log10_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log1p_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log1p_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_softmax_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logcumsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logcumsumexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_and_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_not_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_not_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_not_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_or_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_or_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_or_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_xor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_xor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_xor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logspace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logspace_tensor_overload_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logspace_tensor_overload_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logspace_tensor_overload_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logsumexp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_long_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_long_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_long_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_long_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lu_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lu_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lu_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lu_unpack_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mH_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mT_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mT_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mT_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_amax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_argmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_argmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_argmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_argmin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_cumprod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_cumsum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_cumsum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_log_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_logaddexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_logsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_logsumexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_median_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_normalize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_softmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_std_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_std_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_sum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_var_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_var_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_var_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_var_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_matmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_matmul_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_matrix_exp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_binary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_pool2d_with_indices_backward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_no_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_no_dim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_no_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_with_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_maximum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_maximum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_maximum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_median_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_median_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_list_of_tensors_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_variadic_tensors_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_variadic_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_variadic_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_variadic_tensors_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_no_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_with_dim_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_with_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_minimum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_minimum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mode_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mode_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_movedim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_msort_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_msort_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mul_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_multinomial_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nan_to_num_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nan_to_num_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmean_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmedian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmedian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmedian_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmedian_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanquantile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nansum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nansum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nansum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_native_batch_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_native_layer_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ne_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ne_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ne_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_neg_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_neg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_full_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_ones_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_ones_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_zeros_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_zeros_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nextafter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nextafter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_avg_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_avg_pool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_avg_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_avg_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_avg_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_max_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_avg_pool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_avg_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_batch_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_bilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_bilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_binary_cross_entropy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_binary_cross_entropy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_celu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_channel_shuffle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_channel_shuffle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_channel_shuffle_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_channel_shuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_channel_shuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose2d_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose3d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_cosine_embedding_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_cosine_embedding_loss_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_cosine_embedding_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_cosine_embedding_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_cosine_embedding_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_cosine_similarity_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_ctc_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_dropout2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_dropout2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_dropout3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_dropout_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_feature_alpha_dropout_with_train_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_fractional_max_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_fractional_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_gelu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_gelu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_grid_sample_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_group_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hardshrink_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hardswish_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hardswish_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hardtanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hardtanh_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hinge_embedding_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_instance_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_bilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_nearest_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_trilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_trilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_linear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_linear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_local_response_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_logsigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_logsigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_margin_ranking_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_margin_ranking_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_unpool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_unpool1d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_unpool2d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_mish_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_mse_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_mse_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_multi_head_attention_forward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_multi_head_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_multi_head_attention_forward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_multi_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_multi_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_multi_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_nll_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_circular_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_constant_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_constant_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_reflect_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_reflect_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_negative_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_negative_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_negative_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_negative_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pairwise_distance_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pairwise_distance_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pairwise_distance_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pairwise_distance_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pdist_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pixel_shuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pixel_unshuffle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_prelu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_prelu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_prelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_prelu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_rms_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_rms_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_rrelu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_scaled_dot_product_attention_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_silu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_silu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_smooth_l1_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_soft_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softmin_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softshrink_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softsign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softsign_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softsign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softsign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softsign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_tanhshrink_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_tanhshrink_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_threshold_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_threshold_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_unfold_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_unfold_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_upsample_nearest_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_upsample_nearest_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_static_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_static_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_fro_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_fro_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_inf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_inf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_inf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_nuc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_normal_in_place_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_normal_in_place_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_normal_number_mean_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ones_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ones_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ones_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ones_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ones_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ormqr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_outer_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_outer_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_outer_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_outer_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_pca_lowrank_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_permute_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_permute_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_permute_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_3_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_3_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_4_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_4_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_positive_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_prod_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_put_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_put_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_put_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_quantile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rad2deg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rad2deg_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rad2deg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randint_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randint_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randint_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randint_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ravel_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ravel_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ravel_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ravel_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_real_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_real_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reciprocal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reciprocal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_remainder_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_remainder_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_remainder_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_remainder_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_as_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resize__cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resize__cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resize__cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resize__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resize_as__cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_conj_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_roll_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_roll_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rot90_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rot90_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rot90_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_round_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_round_decimals_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rsqrt_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rsqrt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rsqrt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rsub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scalar_tensor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scalar_tensor_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scalar_tensor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scalar_tensor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_amax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_sum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_searchsorted_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_searchsorted_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_searchsorted_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_searchsorted_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_select_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_select_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_select_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_select_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sgn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sgn_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_short_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sigmoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sigmoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sigmoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signal_windows_bartlett_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signal_windows_gaussian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signal_windows_hann_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signbit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signbit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signbit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signbit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sin_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinc_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sort_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sparse_mm_reduce_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sparse_sampled_addmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sparse_sampled_addmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_airy_ai_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_airy_ai_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_airy_ai_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j0_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_y1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_y1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_t_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_t_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_u_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_u_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_u_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_v_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_v_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_v_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_v_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_v_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_w_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_erfcx_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_erfcx_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_hermite_polynomial_h_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_hermite_polynomial_h_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_hermite_polynomial_h_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_hermite_polynomial_he_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_hermite_polynomial_he_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_laguerre_polynomial_l_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_legendre_polynomial_p_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_log_ndtr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_log_ndtr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_i1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_k0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_k0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_k1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_k1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_k1_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_k1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_ndtri_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_ndtri_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_ndtri_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_scaled_modified_bessel_k1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_v_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_v_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_spherical_bessel_j0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_spherical_bessel_j0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_spherical_bessel_j0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_list_args_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_list_args_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_list_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_list_args_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sqrt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_square_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_multiple_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_multiple_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_mean_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_to_size_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_to_size_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_to_size_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_to_size_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_to_size_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_along_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tanh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tanh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tanh_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensor_split_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensor_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensor_split_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensordot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensordot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensordot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tile_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_sparse_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_sparse_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_topk_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_torch__scaled_mm_cuda_float8_e4m3fn, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_transpose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_transpose_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapezoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapezoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapz_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapz_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_triangular_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tril_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tril_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tril_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tril_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_triu_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_triu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_triu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_triu_indices_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_true_divide_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_true_divide_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unbind_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unbind_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unbind_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unbind_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unflatten_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unflatten_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unflatten_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unflatten_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_consecutive_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_consecutive_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_consecutive_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_cuda_uint32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unravel_index_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_chunk_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_chunk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_split_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_split_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsqueeze_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsqueeze_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsqueeze_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsqueeze_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsqueeze_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_mean_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_mean_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_mean_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vdot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_complex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_copy_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vsplit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vstack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vstack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_xlogy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_xlogy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_xlogy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zero__cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zero__cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zero__cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zero__cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_H_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_T_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___getitem___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___getitem___cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___getitem___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___radd___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___radd___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___radd___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___radd___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___radd___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rand___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rand___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rand___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmatmul___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmod___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmod___cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmul___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmul___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___ror___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___ror___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rpow___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rpow___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rsub___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rxor___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rxor___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__batch_norm_with_update_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__chunk_cat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__chunk_cat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_abs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_abs_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_abs_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_acos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_acos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_acos_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_addcdiv_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_addcdiv_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_addcmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_addcmul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_addcmul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_addcmul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_asin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_asin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_asin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_asin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_atan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_atan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_ceil_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_clamp_max_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_clamp_min_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_clamp_min_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_clamp_min_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_copy_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cos_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cosh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cosh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cosh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_div_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_div_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erfc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erfc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erfc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_exp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_expm1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_expm1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_expm1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_floor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_floor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_floor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lgamma_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lgamma_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lgamma_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lgamma_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log10_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log1p_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log1p_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log1p_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_max_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_max_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_max_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_maximum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_maximum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_minimum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_minimum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_minimum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_minimum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_mul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_mul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_norm_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_pow_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_pow_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_reciprocal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_reciprocal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_reciprocal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_reciprocal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_reciprocal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_round_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_round_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_round_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_round_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sigmoid_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sign_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sinh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sinh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sinh_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sinh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sinh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sinh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sqrt_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sqrt_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sqrt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sub_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tan_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tanh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_trunc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_trunc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_trunc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_trunc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_zero_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_zero_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_zero_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__native_batch_norm_legit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__segment_reduce_offsets_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__segment_reduce_offsets_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_put_accumulate_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_put_accumulate_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__upsample_bilinear2d_aa_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_abs_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_abs_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acosh_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addcdiv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addcdiv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addcmul_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addcmul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addcmul_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addcmul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addmm_decomposed_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addmv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addmv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_alias_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_alias_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_alias_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides___rdiv___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides___rxor___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__chunk_cat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_erfc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_log10_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_mul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_reciprocal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_sigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_sin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_tanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__unsafe_masked_index_put_accumulate_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_acosh_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_aminmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_argmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_argsort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_as_strided_partial_views_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_as_strided_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_atleast_3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cfloat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_chalf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cholesky_inverse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_clamp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_corrcoef_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cosh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_count_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cov_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cumulative_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_deg2rad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_diagonal_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_diff_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_div_floor_rounding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_dot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_dsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_einsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_empty_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_empty_strided_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_fft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_fft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_ihfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_ihfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_irfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_irfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_rfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fill_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_flatten_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_float_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_gcd_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_gt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_half_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_isclose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_isreal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_ldexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_cholesky_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_det_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_det_singular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_eigvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_lu_factor_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_svdvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_vector_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_log_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_log_normal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_logit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_logspace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_logsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_matmul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_msort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_mv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_native_layer_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_avg_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_avg_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_binary_cross_entropy_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_conv2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_ctc_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_fractional_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_gelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_hardsigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_interpolate_linear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_interpolate_nearest-exact_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_local_response_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_max_unpool3d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_multi_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_one_hot_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_pad_reflect_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_pad_replicate_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_pdist_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_pixel_unshuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_rrelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_smooth_l1_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_norm_inf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_ormqr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_pinverse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_polygamma_polygamma_n_0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_polygamma_polygamma_n_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_real_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_resize__cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_resolve_conj_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_resolve_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_round_decimals_0_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_select_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_sgn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_signal_windows_gaussian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_signal_windows_general_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_signal_windows_hamming_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_signal_windows_kaiser_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_sin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_sparse_mm_reduce_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_entr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_i1e_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_laguerre_polynomial_l_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_ndtri_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_xlog1py_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_split_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_split_with_sizes_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_square_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_sub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_sum_to_size_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_trace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_transpose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_trapz_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_triu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_unsqueeze_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_vdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_view_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_angle_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_angle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_angle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_any_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_any_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_arange_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argsort_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argsort_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argwhere_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argwhere_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argwhere_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_partial_views_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_partial_views_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_asinh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_asinh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atan2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atan2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atanh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atanh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_1d_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_1d_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_2d_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_3d_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_3d_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_baddbmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_baddbmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bernoulli_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bernoulli_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bfloat16_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bfloat16_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bfloat16_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bincount_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bincount_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bincount_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_and_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_left_shift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_not_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_not_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_not_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_xor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_block_diag_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_block_diag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_block_diag_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_block_diag_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bool_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bool_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bool_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_tensors_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_tensors_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_to_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_to_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bucketize_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_byte_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_byte_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cartesian_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cdist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cdouble_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cdouble_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cdouble_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cdouble_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cdouble_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cdouble_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ceil_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ceil_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ceil_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ceil_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cfloat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_chalf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_chalf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cholesky_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cholesky_inverse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cholesky_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_chunk_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_chunk_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_max_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_max_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_min_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_min_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_min_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_min_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_min_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clone_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_column_stack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_column_stack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_combinations_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_combinations_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_combinations_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_physical_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_physical_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_physical_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_constant_pad_nd_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_contiguous_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_contiguous_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_contiguous_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_contiguous_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_copysign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_copysign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_corrcoef_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_corrcoef_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_corrcoef_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cosh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cosh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_count_nonzero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_count_nonzero_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cov_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cov_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cross_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cross_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cross_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumprod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumsum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumsum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_deg2rad_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_embed_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_embed_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagflat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diff_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diff_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_digamma_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_floor_rounding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_floor_rounding_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_floor_rounding_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_no_rounding_mode_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_no_rounding_mode_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_no_rounding_mode_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_no_rounding_mode_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_trunc_rounding_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_trunc_rounding_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_double_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_double_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_double_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_double_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dsplit_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dsplit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dstack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dstack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_einsum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_einsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_like_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_permuted_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_permuted_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_permuted_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_permuted_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eq_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eq_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eq_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eq_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_equal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_equal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfinv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfinv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfinv_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_as_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eye_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eye_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftshift_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftshift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfftn_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifftshift_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifftshift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifftshift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_rfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_rfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_rfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_rfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_rfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fill_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fill_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fill_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flatten_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flatten_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flip_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flip_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flip_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flip_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flip_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flip_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fliplr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fliplr_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flipud_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flipud_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flipud_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_power_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_power_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_floor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_floor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_floor_divide_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_frac_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gather_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gather_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ge_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ge_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_geometric_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_geometric_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_geometric_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gradient_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gradient_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_half_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_half_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_heaviside_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_heaviside_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_heaviside_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_histc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_histc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_histc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_histc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hstack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hstack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hypot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_i0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_imag_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_fill_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_fill_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_put_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_put_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_put_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_put_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_amin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_mean_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_inner_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_inner_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_inner_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_int_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_int_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_int_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isclose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isfinite_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isfinite_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isinf_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isinf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isnan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isnan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isneginf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isneginf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isposinf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isposinf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isreal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isreal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isreal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isreal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isreal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_item_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_item_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_item_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_item_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_2inputs_2outputs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_2inputs_2outputs_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_2inputs_2outputs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_2inputs_2outputs_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_2inputs_2outputs_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_2inputs_2outputs_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_binary_return_by_ref_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_unary_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_unary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_unary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kron_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kron_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kron_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kthvalue_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kthvalue_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kthvalue_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kthvalue_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lcm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ldexp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_le_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_le_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_le_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_le_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lgamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_cholesky_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_cond_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_det_singular_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_det_singular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_diagonal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_diagonal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_diagonal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_diagonal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_inv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_inv_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_ldl_factor_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_ldl_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lstsq_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lstsq_grad_oriented_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lstsq_grad_oriented_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_matrix_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_matrix_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_matrix_rank_hermitian_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_matrix_rank_hermitian_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_multi_dot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_multi_dot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_multi_dot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_multi_dot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_norm_subgradients_at_zero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_pinv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_pinv_singular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_pinv_singular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_slogdet_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_slogdet_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_solve_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_solve_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_solve_triangular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_tensorinv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_tensorsolve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_tensorsolve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vander_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vecdot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vector_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vector_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_tensor_overload_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log1p_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log1p_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log2_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_with_dtype_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_with_dtype_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logaddexp2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logcumsumexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_and_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_and_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_and_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_not_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_not_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_not_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_not_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_or_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_or_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_xor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_xor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_tensor_overload_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_tensor_overload_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_tensor_overload_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logsumexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lu_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lu_unpack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mH_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mH_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mH_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mT_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mT_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_amax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumprod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumprod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumprod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumprod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumsum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumsum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumsum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumsum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_fill_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logsumexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logsumexp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_mean_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_mean_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_prod_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_prod_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_select_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_select_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_std_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_std_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_std_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_sum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_sum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_sum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_var_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_var_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_matmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_matmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_matmul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_matrix_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_matrix_exp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_no_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_with_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_with_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_with_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_with_dim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_with_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_maximum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_maximum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_maximum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_median_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_median_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_median_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_meshgrid_list_of_tensors_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_meshgrid_variadic_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_reduction_with_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_reduction_with_dim_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_reduction_with_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_minimum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_minimum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_minimum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_minimum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mode_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_msort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mul_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_multinomial_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nanmean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nanmean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nanmedian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nanmedian_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nanmedian_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nansum_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nansum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nansum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_narrow_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_narrow_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_native_batch_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_native_batch_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ne_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ne_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_neg_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_neg_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_empty_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_empty_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_empty_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_empty_strided_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_ones_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_ones_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_ones_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_ones_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_zeros_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_adaptive_avg_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_adaptive_max_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_adaptive_max_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_alpha_dropout_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_avg_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_avg_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_avg_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_avg_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_avg_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_batch_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_batch_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_batch_norm_without_cudnn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_batch_norm_without_cudnn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_binary_cross_entropy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_binary_cross_entropy_with_logits_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_channel_shuffle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_channel_shuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv1d_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv2d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv_transpose1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv_transpose2d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv_transpose3d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv_transpose3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_cosine_embedding_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_cosine_embedding_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_cross_entropy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_dropout2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_dropout_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_feature_alpha_dropout_with_train_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_fractional_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_fractional_max_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_fractional_max_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_gaussian_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_gelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_hardsigmoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_hardswish_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_hardswish_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_hardtanh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_huber_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_area_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_bilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_linear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_nearest-exact_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_l1_loss_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_l1_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_layer_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_leaky_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_pool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_pool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_unpool1d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_unpool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_unpool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_unpool3d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_mish_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_mse_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_multi_head_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_multi_head_attention_forward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_multi_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_normalize_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_circular_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_circular_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_constant_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_constant_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_constant_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_negative_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_negative_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pairwise_distance_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pairwise_distance_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pairwise_distance_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pdist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_shuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_shuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_shuffle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_unshuffle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_unshuffle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_unshuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_unshuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_relu_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_rms_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_rrelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_smooth_l1_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_soft_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_soft_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_soft_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softmin_with_dtype_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softmin_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softplus_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softshrink_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_tanhshrink_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_tanhshrink_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_threshold_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_threshold_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_unfold_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_unfold_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_unfold_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_upsample_bilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_fro_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_fro_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_fro_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_inf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_inf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_nuc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_normal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_normal_in_place_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_normal_number_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ormqr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_outer_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_outer_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pca_lowrank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pca_lowrank_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_permute_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_permute_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_permute_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pinverse_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pinverse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pinverse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polar_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_0_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_3_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_3_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_3_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_4_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_positive_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_positive_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_put_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_put_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_qr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rad2deg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rad2deg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rad2deg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rad2deg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rand_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rand_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randn_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ravel_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ravel_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ravel_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_real_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_real_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_remainder_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_renorm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_interleave_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_interleave_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_interleave_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_interleave_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_as_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_as_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reshape_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize__cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize__cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize_as__cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize_as__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resolve_conj_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resolve_conj_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resolve_conj_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_roll_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_roll_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_roll_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_roll_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rot90_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rot90_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rot90_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rot90_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_round_decimals_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_round_decimals_neg_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsqrt_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsqrt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsub_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scalar_tensor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scalar_tensor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_add_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_amax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_mean_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_mean_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_searchsorted_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sgn_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sgn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sgn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sgn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sigmoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sigmoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_bartlett_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_blackman_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_general_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_general_cosine_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_kaiser_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signbit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signbit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinc_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_slice_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_slice_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_softmax_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_softmax_with_dtype_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_softmax_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sort_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sparse_mm_reduce_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sparse_sampled_addmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_airy_ai_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_airy_ai_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_j0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_j0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_j1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_j1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_y0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_y0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_y0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_y1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_y1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_v_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_v_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_v_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_entr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_erfcx_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_erfcx_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_hermite_polynomial_h_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_hermite_polynomial_he_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_hermite_polynomial_he_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_hermite_polynomial_he_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i0e_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i0e_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i0e_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i0e_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i1e_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i1e_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i1e_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_laguerre_polynomial_l_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_laguerre_polynomial_l_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_legendre_polynomial_p_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_log_ndtr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_log_ndtr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_k0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_k0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_k1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_k1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_ndtr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_ndtr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_ndtri_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_ndtri_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_scaled_modified_bessel_k0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_scaled_modified_bessel_k1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_scaled_modified_bessel_k1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_v_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_v_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_w_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_xlog1py_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_xlog1py_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_xlog1py_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_xlog1py_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_zeta_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_list_args_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_with_sizes_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_with_sizes_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_with_sizes_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_with_sizes_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sqrt_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sqrt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_square_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_square_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_square_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_multiple_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_multiple_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_multiple_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_multiple_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_multiple_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_std_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_std_mean_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_std_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_std_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_std_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sub_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sub_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_svd_lowrank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_t_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_t_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_t_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_along_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_along_dim_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_along_dim_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tanh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tanh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tensor_split_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tensor_split_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tensor_split_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tensordot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tile_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tile_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tile_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tile_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_topk_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trace_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_transpose_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_transpose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trapz_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trapz_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_triangular_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tril_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tril_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tril_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_triu_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_triu_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_triu_indices_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_true_divide_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trunc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trunc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unbind_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unbind_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unbind_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unflatten_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unflatten_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_uniform_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unique_consecutive_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unique_consecutive_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unique_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unravel_index_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_chunk_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_split_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_split_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_split_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_mean_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_mean_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_unbiased_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vdot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_as_complex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_as_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vsplit_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vstack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_where_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_where_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_where_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_where_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_xlogy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_xlogy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_zero__cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_zero__cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_zero__cuda_int16, test/test_meta.py::TestMetaCUDA::test_embedding_bag_byte_prepack_cuda, test/test_meta.py::TestMetaCUDA::test_embedding_bag_dense_backward_mode_1_cuda, test/test_meta.py::TestMetaCUDA::test_embedding_bag_dense_backward_mode_2_cuda, test/test_meta.py::TestMetaCUDA::test_group_norm_backward_output_mask2_cuda, test/test_meta.py::TestMetaCUDA::test_group_norm_backward_output_mask3_cuda, test/test_meta.py::TestMetaCUDA::test_meta_inplace_H_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_H_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_T_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_T_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_T_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_T_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace___getitem___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace___getitem___cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___getitem___cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace___getitem___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___getitem___cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___getitem___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace___radd___cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace___radd___cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace___radd___cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace___rdiv___cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rdiv___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rdiv___cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rdiv___cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rdiv___cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rmod___cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rmul___cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rmul___cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rmul___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace___ror___cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rpow___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rpow___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rpow___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rsub___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rxor___cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__batch_norm_with_update_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__chunk_cat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__chunk_cat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_abs_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_acos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_acos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_acos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_add_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcdiv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcdiv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcdiv_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcmul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcmul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcmul_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcmul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcmul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_asin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_atan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_atan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_atan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_ceil_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_max_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_max_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_min_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_min_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_cos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_cos_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_cos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_cosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_cosh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_div_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_div_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_div_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_div_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erfc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erfc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erfc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erfc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erfc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_exp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_exp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_expm1_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_expm1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_expm1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_floor_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_floor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_frac_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_frac_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_frac_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lerp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lerp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lerp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lgamma_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lgamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log10_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log10_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log1p_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log1p_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log1p_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_max_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_maximum_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_maximum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_minimum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_mul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_mul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_mul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_neg_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_neg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_neg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_norm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_pow_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_pow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_reciprocal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_reciprocal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_reciprocal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_round_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_round_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_round_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_round_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sigmoid_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sigmoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sinh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sinh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sqrt_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sub_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sub_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_tan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_tan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_tanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_trunc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_trunc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_trunc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_trunc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__segment_reduce_offsets_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__softmax_backward_data_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__unsafe_masked_index_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__upsample_bilinear2d_aa_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_abs_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_abs_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addbmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcdiv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcdiv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcmul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addmm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addmm_decomposed_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addmv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addmv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_alias_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_alias_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_alias_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_alias_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_alias_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_all_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_allclose_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_amax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_amin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_aminmax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_aminmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_angle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_angle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_any_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_any_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_arange_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_arange_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_arange_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_arange_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argmin_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_argsort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argwhere_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argwhere_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_partial_views_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_partial_views_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_partial_views_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_scatter_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asin_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asinh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asinh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asinh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asinh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atan2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atan2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atan_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atan_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atanh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_1d_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_1d_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_2d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bernoulli_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bfloat16_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bfloat16_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_and_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_and_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_and_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_or_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_or_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_right_shift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_right_shift_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_xor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_block_diag_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_block_diag_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_block_diag_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_block_diag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_block_diag_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bool_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bool_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_broadcast_shapes_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_broadcast_to_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bucketize_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bucketize_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bucketize_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cartesian_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cartesian_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cartesian_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cartesian_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cartesian_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cat_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cauchy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cdist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cdouble_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cdouble_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cdouble_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ceil_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ceil_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cfloat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cfloat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cfloat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chalf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chalf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chalf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_char_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_char_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_char_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cholesky_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cholesky_inverse_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cholesky_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_max_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_max_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_max_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_max_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_min_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clone_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clone_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clone_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_combinations_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_complex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_conj_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_conj_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_conj_physical_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_conj_physical_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_constant_pad_nd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_constant_pad_nd_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_contiguous_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_contiguous_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_contiguous_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_contiguous_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_contiguous_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_contiguous_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_copysign_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_copysign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_corrcoef_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cosh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cosh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cosh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_count_nonzero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_count_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_count_nonzero_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_count_nonzero_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cov_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cross_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cross_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cross_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cross_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cummax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cummin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cumprod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cumprod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cumprod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cumulative_trapezoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_deg2rad_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_deg2rad_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_deg2rad_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_deg2rad_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_deg2rad_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_embed_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_embed_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_embed_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diff_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diff_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diff_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_digamma_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_floor_rounding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_floor_rounding_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_floor_rounding_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_trunc_rounding_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_double_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_double_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_double_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dstack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dstack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dstack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_einsum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_einsum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_like_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_like_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_permuted_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_permuted_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_permuted_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_strided_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_strided_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_strided_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_strided_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_equal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_equal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erfinv_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_as_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expm1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expm1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exponential_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eye_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eye_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eye_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eye_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftshift_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifftshift_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifftshift_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_rfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_rfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_rfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_rfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fill_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fill_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flatten_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flatten_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flatten_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flip_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fliplr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fliplr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flipud_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flipud_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flipud_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flipud_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flipud_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_power_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_floor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_floor_divide_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_floor_divide_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_frac_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_frexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_full_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_full_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_full_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_full_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_full_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gather_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_gather_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gather_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ge_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_geqrf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gradient_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gradient_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gradient_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_half_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_half_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_half_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_half_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hstack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hstack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hstack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hypot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_i0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_i0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_i0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_imag_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_add_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_add_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_fill_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_fill_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_put_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_put_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_put_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_amax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_mean_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_prod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_select_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_select_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_inner_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_inner_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_int_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isclose_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isfinite_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isfinite_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isinf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isinf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isnan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isnan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isnan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isnan_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isposinf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isreal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isreal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isreal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_item_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_item_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_4inputs_with_extra_args_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_4inputs_with_extra_args_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_return_by_ref_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_return_by_ref_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_return_by_ref_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_unary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_unary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_unary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_kron_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_kthvalue_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lcm_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_le_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_le_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lerp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lerp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lerp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lgamma_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lgamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lgamma_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cholesky_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cholesky_ex_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cond_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cond_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cross_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cross_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cross_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_det_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_det_singular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_diagonal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_eigh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_eigvals_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_inv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_inv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_inv_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_ldl_factor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_ldl_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lstsq_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lstsq_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lstsq_grad_oriented_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_factor_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_factor_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_matrix_rank_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_multi_dot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_multi_dot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_pinv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_pinv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_pinv_hermitian_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_pinv_singular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_qr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_qr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_slogdet_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_solve_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_solve_triangular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_svdvals_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_tensorinv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_vecdot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_vecdot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_vecdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_vecdot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linspace_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linspace_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_linspace_tensor_overload_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linspace_tensor_overload_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linspace_tensor_overload_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log10_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log1p_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log1p_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_normal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_softmax_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_softmax_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logaddexp2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logaddexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logcumsumexp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logdet_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_and_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_not_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_not_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_not_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_or_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_xor_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_xor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_xor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logspace_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logspace_tensor_overload_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_long_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_long_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lt_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lu_unpack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mH_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mH_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mT_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_mT_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mT_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_amin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_argmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_argmax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_argmin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumsum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumsum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumsum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_fill_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_fill_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_logsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_logsumexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_logsumexp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_mean_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_mean_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_median_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_select_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_std_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_sum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_sum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_var_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_var_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_matmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_matrix_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_binary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_no_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_with_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_median_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_list_of_tensors_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_list_of_tensors_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_list_of_tensors_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_list_of_tensors_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_variadic_tensors_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_variadic_tensors_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_binary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_reduction_no_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_reduction_no_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_reduction_with_dim_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_minimum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_minimum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mode_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_movedim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_multinomial_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nan_to_num_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nan_to_num_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nanmedian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nanmedian_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nansum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nansum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nansum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_native_dropout_backward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ne_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ne_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ne_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_neg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_neg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_strided_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_strided_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_strided_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_full_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_full_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_full_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_full_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_ones_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_ones_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_ones_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_ones_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_ones_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_zeros_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_zeros_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_zeros_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_zeros_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nextafter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_adaptive_avg_pool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_adaptive_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_adaptive_max_pool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_alpha_dropout_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_alpha_dropout_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_avg_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_batch_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_batch_norm_without_cudnn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv_transpose1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv_transpose3d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv_transpose3d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv_transpose3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv_transpose3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_embedding_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_embedding_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_embedding_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_embedding_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_similarity_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cross_entropy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cross_entropy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_dropout2d_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_dropout2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_embedding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_glu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_grid_sample_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_group_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardshrink_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardswish_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardswish_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardswish_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardtanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardtanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardtanh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardtanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hinge_embedding_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hinge_embedding_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_instance_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_instance_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_area_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_bilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_bilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_nearest-exact_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_nearest-exact_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_trilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_kl_div_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_kl_div_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_l1_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_leaky_relu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_linear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_linear_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_linear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_linear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_margin_ranking_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool1d_grad_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool1d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool2d_grad_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool3d_grad_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool3d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_multi_head_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_multi_head_attention_forward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_multi_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_multilabel_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_multilabel_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_nll_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_normalize_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_circular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_circular_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_circular_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_constant_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_constant_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_reflect_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_reflect_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_negative_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_negative_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_negative_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_negative_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pairwise_distance_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pairwise_distance_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pairwise_distance_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pairwise_distance_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pdist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_shuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_shuffle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_unshuffle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_unshuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_unshuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_unshuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_unshuffle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_poisson_nll_loss_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_poisson_nll_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_poisson_nll_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_prelu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_prelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_prelu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu6_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_rms_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_rms_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_scaled_dot_product_attention_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_scaled_dot_product_attention_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_smooth_l1_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softmin_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softmin_with_dtype_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softmin_with_dtype_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softmin_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softshrink_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softshrink_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softsign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softsign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_tanhshrink_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_tanhshrink_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_threshold_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_threshold_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_triplet_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_triplet_margin_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_unfold_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_upsample_nearest_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_upsample_nearest_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_upsample_nearest_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_static_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_static_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_static_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_static_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_fro_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_fro_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_inf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_nuc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_nuc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_normal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_normal_in_place_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_normal_number_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_normal_number_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_like_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ormqr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ormqr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ormqr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_pca_lowrank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_permute_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_3_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_positive_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_pow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_pow_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_put_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_put_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_qr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rand_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_like_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_randn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randn_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randn_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randn_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randn_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ravel_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ravel_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ravel_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reciprocal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reciprocal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_remainder_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_interleave_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_interleave_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_interleave_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_interleave_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_as_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize__cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize_as__cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_neg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_roll_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_roll_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_roll_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_roll_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rot90_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rot90_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_round_decimals_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_round_decimals_neg_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsqrt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsqrt_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsub_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsub_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsub_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsub_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsub_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scalar_tensor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_sum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_sum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_select_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_select_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_select_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_select_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sgn_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sgn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sgn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_short_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_short_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sigmoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signal_windows_bartlett_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signal_windows_blackman_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signal_windows_general_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signal_windows_hamming_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signal_windows_kaiser_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signbit_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_signbit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signbit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signbit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sinc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sinc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sinh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sinh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_airy_ai_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_airy_ai_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_j1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_u_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_u_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_v_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_v_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_v_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_w_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_w_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_hermite_polynomial_h_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_hermite_polynomial_h_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i0e_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i0e_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i1e_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_laguerre_polynomial_l_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_laguerre_polynomial_l_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_legendre_polynomial_p_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_legendre_polynomial_p_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_log_ndtr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_i0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_i1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_i1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_ndtr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_ndtri_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_ndtri_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_v_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_spherical_bessel_j0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_spherical_bessel_j0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_spherical_bessel_j0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_xlog1py_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_xlog1py_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_list_args_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_list_args_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_copy_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sqrt_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_square_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_square_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_square_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_multiple_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_multiple_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_stack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_stack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_stack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_mean_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sub_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_to_size_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_to_size_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_to_size_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_to_size_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_svd_lowrank_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_svd_lowrank_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_along_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tanh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tanh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tensor_split_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tensordot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tile_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tile_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_tile_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_sparse_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_topk_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trace_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_transpose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_transpose_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_transpose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_transpose_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapezoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapezoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapezoid_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapz_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapz_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tril_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tril_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tril_indices_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_triu_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_true_divide_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_true_divide_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_true_divide_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_true_divide_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_true_divide_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trunc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trunc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trunc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unflatten_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unflatten_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unflatten_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unflatten_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_uniform_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_uniform_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unique_consecutive_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unique_cuda_uint32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unravel_index_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsafe_chunk_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsafe_chunk_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_var_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_var_mean_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_var_unbiased_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_vdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_complex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_real_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vstack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vstack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_where_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_where_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_where_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_xlogy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_xlogy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zero__cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_like_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_H_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_T_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_T_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_T_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_T_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___getitem___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace___getitem___cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace___getitem___cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___getitem___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___radd___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___radd___cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___radd___cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rand___cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rdiv___cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmatmul___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmatmul___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmod___cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmod___cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmod___cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmul___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmul___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rpow___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rpow___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rpow___cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rsub___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rsub___cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rxor___cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rxor___cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__batch_norm_with_update_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__batch_norm_with_update_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__chunk_cat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__chunk_cat_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__chunk_cat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__chunk_cat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_abs_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_abs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_acos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_acos_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcdiv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcdiv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcmul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcmul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_asin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_asin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_asin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_asin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_atan_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_atan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_ceil_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_ceil_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_ceil_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_max_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_max_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_max_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_min_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cos_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cos_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cos_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cosh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cosh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_div_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_div_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_div_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erfc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erfc_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erfc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erfc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_exp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_exp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_exp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_expm1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_expm1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_floor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_floor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_floor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_floor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_frac_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lerp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lerp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lgamma_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lgamma_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lgamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lgamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lgamma_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log10_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log10_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log10_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log10_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log10_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log1p_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log1p_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log1p_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_max_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_maximum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_minimum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_mul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_neg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_neg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_norm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_pow_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_pow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_pow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_reciprocal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_reciprocal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_reciprocal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_reciprocal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_round_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sigmoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sinh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sinh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sqrt_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sub_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tanh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_trunc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_trunc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_trunc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_zero_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_zero_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace__native_batch_norm_legit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__native_batch_norm_legit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__segment_reduce_lengths_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__segment_reduce_lengths_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__unsafe_masked_index_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__unsafe_masked_index_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__unsafe_masked_index_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__unsafe_masked_index_put_accumulate_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_abs_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_acos_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_acosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_acosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_add_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addbmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addbmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addbmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcdiv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcmul_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcmul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcmul_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcmul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addmm_decomposed_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addmm_decomposed_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addmm_decomposed_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addmm_decomposed_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_alias_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_alias_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_all_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_all_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_all_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_allclose_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_allclose_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_amax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_amax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_amin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_aminmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_angle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_angle_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_any_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_arange_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argsort_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argwhere_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_partial_views_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_asin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_asin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_asin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_asin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_asinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atanh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_1d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_2d_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_2d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_3d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_baddbmm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_baddbmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_baddbmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bernoulli_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bernoulli_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bernoulli_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bfloat16_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bfloat16_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bfloat16_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bfloat16_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bfloat16_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_not_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_right_shift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_right_shift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_block_diag_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_block_diag_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_block_diag_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_bool_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bool_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bool_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_tensors_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_tensors_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_tensors_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_to_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_to_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_to_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_to_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bucketize_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bucketize_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bucketize_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_byte_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_byte_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_byte_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cartesian_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cartesian_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cauchy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdouble_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdouble_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdouble_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdouble_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ceil_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ceil_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cfloat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cfloat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cfloat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_chalf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_chalf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_char_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cholesky_inverse_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cholesky_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cholesky_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cholesky_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_chunk_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_chunk_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_chunk_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_chunk_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_max_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_min_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_min_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_min_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_min_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_min_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clone_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clone_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clone_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_column_stack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_column_stack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_combinations_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_combinations_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_combinations_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_physical_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_physical_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_physical_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_constant_pad_nd_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_constant_pad_nd_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_contiguous_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_contiguous_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_contiguous_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_contiguous_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_contiguous_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_copysign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_corrcoef_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_corrcoef_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cosh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_count_nonzero_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_count_nonzero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cov_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cov_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cross_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cross_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cumsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cumulative_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cumulative_trapezoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_deg2rad_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_deg2rad_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_embed_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_embed_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_embed_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagflat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagflat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagflat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diff_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diff_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diff_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_digamma_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_digamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_digamma_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dist_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dist_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_no_rounding_mode_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_no_rounding_mode_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_no_rounding_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_trunc_rounding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_trunc_rounding_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_double_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_double_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_double_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_double_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dstack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dstack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_einsum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_einsum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_einsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_permuted_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_permuted_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_permuted_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_strided_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eq_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eq_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_equal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfinv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfinv_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exp2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exp2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_as_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_as_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expm1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expm1_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exponential_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eye_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eye_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftshift_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfftn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftshift_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftshift_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftshift_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft2_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfftn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_rfft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_rfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fill_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fill_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fill_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flatten_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flatten_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flip_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flip_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fliplr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fliplr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flipud_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flipud_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flipud_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flipud_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_float_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_float_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_float_power_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_floor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_floor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_floor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_floor_divide_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_floor_divide_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_frexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gather_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_gather_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gcd_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ge_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ge_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ge_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ge_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_geometric_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_geometric_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gradient_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_half_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_half_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_heaviside_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_heaviside_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_histc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_histc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_histc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hsplit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hstack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hstack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_i0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_i0_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_i0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_add_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_fill_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_fill_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_put_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_put_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_put_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_mean_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_mean_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_select_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_inner_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_int_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_int_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isclose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isclose_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isfinite_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isfinite_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isinf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isnan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isnan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isneginf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isposinf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isposinf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isposinf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isposinf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isposinf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isreal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isreal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isreal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isreal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isreal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isreal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_item_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_item_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_item_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_2inputs_2outputs_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_4inputs_with_extra_args_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_4inputs_with_extra_args_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_4inputs_with_extra_args_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_4inputs_with_extra_args_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_return_by_ref_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_return_by_ref_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_unary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_unary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_unary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_kron_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_kron_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_kron_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_kron_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_kthvalue_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_kthvalue_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lcm_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lcm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lcm_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_ldexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ldexp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ldexp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_le_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_le_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lerp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cross_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cross_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_det_singular_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_det_singular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_diagonal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_diagonal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_diagonal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_diagonal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_ldl_factor_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_ldl_factor_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_ldl_solve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lstsq_grad_oriented_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lstsq_grad_oriented_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lu_factor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lu_factor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_matrix_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_matrix_rank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_matrix_rank_hermitian_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_multi_dot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_multi_dot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_norm_subgradients_at_zero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_norm_subgradients_at_zero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_pinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_triangular_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_triangular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_triangular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_svd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_svdvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_svdvals_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_tensorinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_tensorsolve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log10_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_log1p_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log1p_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log1p_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_with_dtype_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_and_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_and_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_and_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_and_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_not_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_or_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_or_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_or_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logspace_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logspace_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logspace_tensor_overload_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logspace_tensor_overload_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logsumexp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logsumexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_long_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_long_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lu_unpack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mH_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mT_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_amax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_amin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_argmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_argmax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_cumprod_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_cumprod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_cumprod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_cumprod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_cumsum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_cumsum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_logaddexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_logsumexp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_logsumexp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_mean_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_mean_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_median_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_median_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_median_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_normalize_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_normalize_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_select_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_select_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_softmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_softmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_std_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_std_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_std_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_std_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_var_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_var_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_matmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_matmul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_matmul_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_matrix_exp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_matrix_exp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_matrix_exp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_binary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_pool2d_with_indices_backward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_reduction_no_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_reduction_no_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_reduction_with_dim_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_reduction_with_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_maximum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_maximum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_maximum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_maximum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_list_of_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_variadic_tensors_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_variadic_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_variadic_tensors_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_variadic_tensors_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_no_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_no_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_with_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_with_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_with_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_movedim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nan_to_num_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nanmean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nanmean_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nanmedian_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nansum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nansum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_copy_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_native_dropout_backward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ne_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ne_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ne_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ne_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_neg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_neg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_empty_strided_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_full_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_ones_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_ones_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_ones_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_zeros_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_zeros_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_zeros_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nextafter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nextafter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_adaptive_avg_pool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_adaptive_avg_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_adaptive_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_adaptive_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_alpha_dropout_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_avg_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_avg_pool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_avg_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_avg_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_batch_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_batch_norm_without_cudnn_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_channel_shuffle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_channel_shuffle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_channel_shuffle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv2d_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv3d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_cosine_embedding_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_cosine_embedding_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_cross_entropy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_dropout3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_dropout_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_elu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_elu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_embedding_bag_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_embedding_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_grid_sample_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hardshrink_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hardsigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hardswish_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_instance_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_bilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_linear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_linear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_nearest-exact_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_nearest_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_nearest_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_l1_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_layer_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_linear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_margin_ranking_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_margin_ranking_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_unpool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_mse_loss_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_multi_head_attention_forward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_multi_head_attention_forward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_multilabel_soft_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_nll_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_nll_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_circular_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_circular_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_circular_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_constant_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_constant_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_reflect_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_reflect_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_negative_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_negative_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_shuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_shuffle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_unshuffle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_poisson_nll_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_poisson_nll_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_prelu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_relu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_relu_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_rms_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_selu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_silu_complex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_silu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_silu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_silu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_smooth_l1_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_soft_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softmin_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softmin_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softplus_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softplus_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softshrink_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softsign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softsign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softsign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_tanhshrink_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_threshold_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_triplet_margin_loss_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_unfold_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_upsample_bilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_upsample_nearest_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_static_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_static_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_static_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_static_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_fro_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_inf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_inf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_inf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_inf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_normal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_normal_in_place_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_normal_in_place_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_normal_in_place_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_normal_number_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_like_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_outer_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_outer_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_permute_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_permute_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_permute_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_permute_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_permute_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_4_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_4_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_4_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_positive_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pow_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_prod_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_put_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_put_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rad2deg_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rand_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randint_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randint_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randn_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randn_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_real_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_real_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reciprocal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reciprocal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_remainder_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_remainder_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_renorm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_renorm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_interleave_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_interleave_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_interleave_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_interleave_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_as_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resize__cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resize_as__cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_resize_as__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_conj_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_neg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_neg_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_roll_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rot90_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rot90_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rot90_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_round_decimals_0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_round_decimals_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_round_decimals_3_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_round_decimals_neg_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsqrt_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsqrt_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsqrt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsqrt_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsub_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scalar_tensor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scalar_tensor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scalar_tensor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_add_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_amax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_mean_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_sum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_sum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_searchsorted_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_searchsorted_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_searchsorted_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_searchsorted_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sgn_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sgn_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sgn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sgn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sgn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_short_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sigmoid_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sigmoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_blackman_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_cosine_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_gaussian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_general_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_kaiser_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signbit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signbit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sin_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sinc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sinh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_slice_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_slice_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_slice_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_slice_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_softmax_with_dtype_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sparse_sampled_addmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_airy_ai_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_airy_ai_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_y0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_y1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_y1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_y1_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_t_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_u_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_u_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_u_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_v_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_w_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_w_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_entr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_entr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_erfcx_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_erfcx_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_hermite_polynomial_h_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_hermite_polynomial_h_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_hermite_polynomial_he_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i0e_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1e_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1e_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1e_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1e_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_laguerre_polynomial_l_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_legendre_polynomial_p_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_legendre_polynomial_p_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_log_ndtr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_log_ndtr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_log_ndtr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_log_ndtr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_i0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_i1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_i1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtri_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_scaled_modified_bessel_k0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_scaled_modified_bessel_k0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_scaled_modified_bessel_k1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_shifted_chebyshev_polynomial_w_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_spherical_bessel_j0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_spherical_bessel_j0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_zeta_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_zeta_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_zeta_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_zeta_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_list_args_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_list_args_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sqrt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sqrt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_square_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_square_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_multiple_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_multiple_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_multiple_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_multiple_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_multiple_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_stack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_mean_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_mean_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_mean_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_mean_unbiased_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_stft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sub_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sum_to_size_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sum_to_size_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_svd_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_svd_lowrank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_svd_lowrank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_t_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_t_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_t_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_t_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_t_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_along_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_along_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tanh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tensordot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tile_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_sparse_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_sparse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_sparse_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_sparse_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_topk_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_topk_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_topk_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trace_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trace_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_transpose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_transpose_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapezoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapezoid_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapezoid_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapz_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapz_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapz_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_triangular_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tril_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tril_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tril_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_triu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_triu_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_triu_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_true_divide_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_true_divide_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_true_divide_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_true_divide_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_true_divide_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unflatten_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unflatten_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unfold_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unfold_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_uniform_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_uniform_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unique_consecutive_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unique_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unique_cuda_uint32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_chunk_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_chunk_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_chunk_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_chunk_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_split_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsqueeze_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsqueeze_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsqueeze_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsqueeze_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_var_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_var_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_var_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_var_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vdot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vdot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vdot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_as_complex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_as_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_as_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_as_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vstack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vstack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_where_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_where_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_where_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_where_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_where_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_xlogy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_xlogy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zero__cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zero__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zero__cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_int8 2024-08-07T18:59:06.8612155Z 2024-08-07T18:59:10.1039582Z Running inductor/test_torchinductor_dynamic_shapes 3/4 ... [2024-08-07 18:59:10.103425] 2024-08-07T18:59:10.1043627Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_torchinductor_dynamic_shapes.py', '-m', 'not serial', '--shard-id=3', '--num-shards=4', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 18:59:10.103934] 2024-08-07T19:07:47.3732968Z 2024-08-07T19:07:47.3737928Z inductor/test_torchinductor_dynamic_shapes 3/4 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_torchinductor_dynamic_shapes_3.4_78a0d962c2e1239e_.log 2024-08-07T19:07:47.3833682Z Running 167 items in this shard: test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_AllenaiLongformerBase_repro_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_adaptive_avg_pool2d1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_adaptive_max_pool2d1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_adaptive_max_pool2d2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_add_complex3_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_add_inplace_permuted_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_alexnet_prefix_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_aliased_buffer_reuse_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_aoti_eager_cache_hit_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_aoti_eager_dtype_device_layout_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_aoti_eager_with_persistent_cache_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_arange2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_argmax_argmin2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_argmax_argmin_with_duplicates_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_argmax_min_int32_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_argmax_to_float_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_avg_pool2d1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_avg_pool2d4_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_avg_pool2d8_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_avg_pool2d_backward_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_avg_pool3d_backward2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_avg_pool3d_backward3_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_bernoulli1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_bitwise_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_buffer_use_after_remove_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_builtins_round_float_ndigits_zero_dynamic_shapes_cpu, 
test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_cat_inplace_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_cat_of_loops_and_extern_kernel_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_cat_uint8_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_cat_unbacked_2d_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_cat_upcasting_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_clone_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_compar_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_config_option_dont_assume_alignment_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_consecutive_split_cumprod_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_consecutive_split_cumsum_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_constant_pad_1d_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_conv2d_backward_channels_last_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_conv3d_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_convolution5_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_custom_op_1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_custom_op_2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_custom_op_3_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_custom_scan_op_compiled_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_custom_scan_would_split_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_data_type_propogation_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_dist_bf16_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_div2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_div4_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_div8_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_div_precision_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_dropout_deterministic_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_dropout_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_embedding_bag_byte_unpack_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_embedding_bag_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_embedding_dynamic_shapes_cpu, 
test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_empty_strided_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_erfinv_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_expanded_reduction_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_fill2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_float_index_expression_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_floordiv_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_fractional_max_pool2d4_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_full_truncation_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_functionalize_rng_wrappers_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_fusing_write_into_disjoint_read_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_gather1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_hardswish_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_index1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_index_dynamic_shapes_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_index_put1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_index_put3_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_inplace_activations_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_inplace_resize_as_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_input_mutation2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_lerp_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_lgamma_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_like_channels_last_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_like_rands_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_linear_mixed_dtype_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_log1p_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_logcumsumexp_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_logcumsumexp_zero_dim_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_masked_fill_promotion_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_max_min_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_max_pool2d3_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_max_pool2d7_dynamic_shapes_cpu, 
test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_mm_views_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_multi_threading_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_multilayer_var_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_multilayer_var_lowp_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_mutations_loop_fusion_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_philox_rand_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_bessel_j1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_bessel_y0_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_chebyshev_polynomial_v_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_modified_bessel_k1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_polygamma_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_round_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_scaled_modified_bessel_k1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_shifted_chebyshev_polynomial_u_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_shifted_chebyshev_polynomial_w_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_xlog1py_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pointwise_zeta_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_pow3_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_profiler_mark_wrapper_call_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_reduction1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_reduction2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_reinterpret_dtypeview_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_remove_noop_clone_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_remove_noop_copy_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_repeat_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_repeat_interleave_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_require_stride_expanded_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_scalar_input_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_scatter4_dynamic_shapes_cpu, 
test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_scatter5_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_scatter_add1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_scatter_reduce2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sdpa_unaligned_mask_freezing_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sdpa_use_block_ptr_False_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sdpa_use_block_ptr_True_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sgn_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sgn_extremal_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_shape_prop_torch_ones_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_single_elem_indirect_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_slice_mutation1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_slice_mutation2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sort_bool_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sort_stable_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_split_cumsum_low_prec_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_split_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_split_failed_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_split_with_integer_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_split_with_unbacked_symints_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sqrt_dynamic_shapes_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_squeeze1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_squeeze2_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_strided_inputs_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sum4_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sum_dtype_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_sum_keepdims_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_tmp_not_defined_issue1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_tmp_not_defined_issue3_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_to_device_constant_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_topk_dynamic_shapes_cpu, 
test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_transposed_propagates_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_uint_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_vectorized_ops_masked_var_novec_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_vertical_fusion1_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_view_on_aliased_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_views5_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_views7_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::DynamicShapesCpuTests::test_where_broadcast_dynamic_shapes_cpu, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_adaptive_max_pool3d_with_indices_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_bool_mask_nobreak_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_full_symbolic_value_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_interpolate_ceil_eq_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_item_materialize_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_math_ops_op1_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_math_ops_op2_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_math_ops_op7_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_math_ops_op8_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_nonzero_size_factory_nobreak_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_shape_as_constant_reciprocal_float_exp_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_sub_constant_folding_cuda, test/inductor/test_torchinductor_dynamic_shapes.py::TestInductorDynamicCUDA::test_unbacked_save_for_backwards_cuda 2024-08-07T19:07:47.3927411Z 2024-08-07T19:07:51.2166352Z Running inductor/test_cuda_cpp_wrapper 1/1 ... [2024-08-07 19:07:51.216059] 2024-08-07T19:07:51.2170565Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_cuda_cpp_wrapper.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 19:07:51.216520] 2024-08-07T19:07:59.0988077Z 2024-08-07T19:07:59.0989375Z inductor/test_cuda_cpp_wrapper 1/1 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_cuda_cpp_wrapper_1.1_5bc913c3d0b0a585_.log 2024-08-07T19:07:59.0990285Z 2024-08-07T19:08:02.9109823Z Running test_ops_jit 3/3 ... [2024-08-07 19:08:02.910407] 2024-08-07T19:08:02.9114438Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'test_ops_jit.py', '-m', 'not serial', '--shard-id=3', '--num-shards=3', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 19:08:02.911020] 2024-08-07T19:08:59.2055934Z 2024-08-07T19:08:59.2057531Z test_meta 5/5 was successful, full logs can be found in artifacts with path test/test-reports/test_meta_5.5_d6d8ec1fb3599b2f_.log 2024-08-07T19:08:59.5346628Z Running 8148 items in this shard: test/test_meta.py::TestMetaConverter::test_channels_last_non_leaf, test/test_meta.py::TestMetaConverter::test_tensor_outlives_converter, test/test_meta.py::TestMetaConverter::test_view_of_leaf, test/test_meta.py::TestMetaCUDA::test_batch_norm_backward_output_mask3_cuda, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype___rmul___cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype___rsub___cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_atan2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_div_floor_rounding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_eq_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_igamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_minimum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_ne_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_remainder_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype__refs_xlogy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_atan2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_div_floor_rounding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_fmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_fmod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_logical_or_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_min_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_ne_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_nextafter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_special_chebyshev_polynomial_v_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_special_zeta_cuda_float32, test/test_meta.py::TestMetaCUDA::test_binary_ufuncs_mixed_dtype_sub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_H_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_T_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___getitem___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___radd___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___radd___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___radd___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rand___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rand___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rdiv___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rdiv___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rdiv___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rmul___cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rmul___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rpow___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rsub___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rsub___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rsub___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rsub___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rsub___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rxor___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace___rxor___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__chunk_cat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__chunk_cat_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__chunk_cat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__chunk_cat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__chunk_cat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__chunk_cat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_abs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_abs_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_abs_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_acos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_acos_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_addcdiv_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_asin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_asin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_asin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_asin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_atan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_ceil_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_clamp_max_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_clamp_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_clamp_max_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_clamp_max_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_clamp_min_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cosh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cosh_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cosh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_cosh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_div_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_div_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_div_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erfc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erfc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erfc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_erfc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_exp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_exp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_expm1_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_expm1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_floor_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_floor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_floor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_floor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_floor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_frac_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_frac_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_lerp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_lerp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_lgamma_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_lgamma_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log10_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log10_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log1p_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log1p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log1p_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log1p_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_log_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_max_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_maximum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_maximum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_minimum_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_mul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_mul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_mul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_neg_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_neg_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_pow_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_reciprocal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_round_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_round_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_round_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sign_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sinh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sqrt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sqrt_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sqrt_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sqrt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sqrt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sub_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_sub_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tan_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_tanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_trunc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_trunc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_trunc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_trunc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_trunc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_trunc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_trunc_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_zero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__foreach_zero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__native_batch_norm_legit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__native_batch_norm_legit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__softmax_backward_data_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_put_accumulate_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_put_accumulate_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__unsafe_masked_index_put_accumulate_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace__upsample_bilinear2d_aa_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_abs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_abs_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_acos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_acos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_acosh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_acosh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_acosh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_acosh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addbmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addcmul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addmm_decomposed_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addmv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addmv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addmv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_addr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_alias_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_alias_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_alias_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_all_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_all_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_all_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_allclose_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_amin_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_aminmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_angle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_angle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_any_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_any_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_any_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_any_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argmax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argsort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argsort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argsort_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argsort_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argsort_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argwhere_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_argwhere_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_partial_views_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_as_strided_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asinh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asinh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_asinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atan2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atanh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_2d_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_2d_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_2d_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_2d_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_2d_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_3d_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_3d_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_atleast_3d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_baddbmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bernoulli_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bfloat16_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bfloat16_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bfloat16_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bfloat16_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_and_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_and_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_left_shift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_left_shift_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_not_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_not_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_or_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_right_shift_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_xor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bitwise_xor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_block_diag_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_block_diag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_block_diag_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bool_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bool_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_tensors_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_to_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_broadcast_to_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_bucketize_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_byte_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_byte_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_byte_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_byte_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cartesian_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cartesian_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cdist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cdouble_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cdouble_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ceil_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ceil_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ceil_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cfloat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chalf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chalf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_char_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_char_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cholesky_inverse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cholesky_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cholesky_solve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chunk_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_chunk_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_min_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_min_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_min_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_min_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clamp_min_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clone_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_clone_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_column_stack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_column_stack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_column_stack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_combinations_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_physical_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_conj_physical_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_constant_pad_nd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_constant_pad_nd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_constant_pad_nd_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_contiguous_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_copysign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_copysign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cosh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cosh_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cov_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cov_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cov_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cov_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cummax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cummax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cummax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumprod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumprod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumprod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumsum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumulative_trapezoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_cumulative_trapezoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_deg2rad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_deg2rad_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diag_embed_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagflat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_scatter_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diagonal_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diff_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_diff_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_digamma_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_floor_rounding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_floor_rounding_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_no_rounding_mode_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_no_rounding_mode_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_no_rounding_mode_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_trunc_rounding_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_trunc_rounding_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_div_trunc_rounding_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_double_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_double_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dsplit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dstack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_dstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_einsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_permuted_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_permuted_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_permuted_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_strided_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_empty_strided_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_eq_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_eq_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_equal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_equal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_erfc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_as_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_as_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_copy_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expand_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expm1_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expm1_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_expm1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_exponential_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_eye_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_eye_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftshift_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftshift_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_fftshift_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_hfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifftshift_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifftshift_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ifftshift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_ihfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft2_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_irfftn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_rfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_rfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_rfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_rfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fft_rfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fill_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flatten_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flatten_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flip_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fliplr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fliplr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fliplr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flipud_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flipud_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flipud_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flipud_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_flipud_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_power_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_power_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_power_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_float_power_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_floor_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_floor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_floor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_floor_divide_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_fmod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_full_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_full_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_full_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_full_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_full_like_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gather_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gather_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gcd_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gcd_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gcd_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gcd_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ge_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ge_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_geometric_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_geometric_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gradient_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gradient_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gradient_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gradient_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_grid_sampler_2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_gt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_half_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_half_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_half_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_histc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hsplit_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hstack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_hstack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_i0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_imag_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_add_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_fill_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_fill_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_fill_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_fill_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_fill_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_put_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_put_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_amax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_amin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_prod_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_reduce_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_select_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_index_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_inner_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_inner_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_int_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_int_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isclose_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isfinite_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isfinite_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isinf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isinf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isinf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isinf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isnan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isnan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isnan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isnan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isposinf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isposinf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_isreal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_item_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_2inputs_2outputs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_2inputs_2outputs_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_4inputs_with_extra_args_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_4inputs_with_extra_args_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_return_by_ref_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_return_by_ref_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_binary_return_by_ref_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_unary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_jiterator_unary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kron_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_kthvalue_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lcm_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ldexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ldexp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ldexp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ldexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ldexp_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_le_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lerp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lerp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lgamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lgamma_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cholesky_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cholesky_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cholesky_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cross_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_cross_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_det_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_diagonal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_diagonal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_diagonal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eig_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eig_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eigh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eigh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eigh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eigvals_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_eigvalsh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_inv_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_ldl_factor_ex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_ldl_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_lstsq_grad_oriented_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_lstsq_grad_oriented_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_lu_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_lu_factor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_lu_factor_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_matrix_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_matrix_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_matrix_power_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_matrix_rank_hermitian_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_multi_dot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_pinv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_pinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_pinv_singular_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_slogdet_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_solve_triangular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_svdvals_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_tensorinv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_tensorinv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_tensorsolve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vander_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vander_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vander_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vecdot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vecdot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linalg_vecdot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linspace_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linspace_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linspace_tensor_overload_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_linspace_tensor_overload_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log10_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log10_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log10_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log10_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log10_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log1p_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_normal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_with_dtype_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_with_dtype_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_log_softmax_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logaddexp2_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logaddexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logaddexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logdet_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_and_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_and_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_not_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_or_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_or_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_or_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_xor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_xor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logical_xor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logspace_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logspace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logspace_tensor_overload_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logspace_tensor_overload_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logsumexp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logsumexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_logsumexp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_long_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_long_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_long_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_long_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lu_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_lu_unpack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mH_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mT_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mT_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mT_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mT_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mT_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_argmax_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumprod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumprod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumprod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumprod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumprod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumsum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_cumsum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_fill_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_log_softmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_logsumexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_logsumexp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_mean_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_normalize_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_prod_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_select_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_softmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_std_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_std_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_std_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_sum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_masked_var_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_matrix_exp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_matrix_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_binary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_reduction_no_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_reduction_no_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_max_reduction_with_dim_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_maximum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_maximum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_median_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_median_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_median_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_variadic_tensors_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_meshgrid_variadic_tensors_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_binary_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_reduction_no_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_min_reduction_with_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_minimum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mode_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mode_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_movedim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_movedim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_movedim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_movedim_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_msort_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_msort_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mul_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nan_to_num_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nan_to_num_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nanmean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nansum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nansum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nansum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nansum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nansum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_narrow_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_native_batch_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ne_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ne_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ne_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_neg_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_neg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_empty_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_full_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_ones_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_ones_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_zeros_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_zeros_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_zeros_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_new_zeros_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nextafter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_adaptive_avg_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_adaptive_avg_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_adaptive_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_avg_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_batch_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_batch_norm_without_cudnn_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_batch_norm_without_cudnn_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_binary_cross_entropy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_binary_cross_entropy_with_logits_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_channel_shuffle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_channel_shuffle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv1d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv3d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv3d_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv_transpose1d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv_transpose1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_conv_transpose3d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_cosine_embedding_loss_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_cosine_embedding_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_cosine_embedding_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_cosine_embedding_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_cross_entropy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_dropout2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_dropout2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_dropout3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_dropout_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_dropout_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_dropout_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_embedding_bag_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_embedding_bag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_fractional_max_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_fractional_max_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_gaussian_nll_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_glu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_group_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardshrink_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardshrink_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardshrink_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardtanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardtanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hardtanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_hinge_embedding_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_bilinear_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_nearest-exact_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_nearest_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_interpolate_trilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_kl_div_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_l1_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_layer_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_leaky_relu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_linear_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_linear_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_linear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_logsigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_margin_ranking_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_margin_ranking_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_pool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_unpool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_unpool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_max_unpool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_mish_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_mse_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multi_head_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multi_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multilabel_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_multilabel_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_nll_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_normalize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_circular_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_circular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_circular_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_constant_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_reflect_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_reflect_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_reflect_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_replicate_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_replicate_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pad_replicate_negative_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pairwise_distance_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pairwise_distance_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pairwise_distance_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pairwise_distance_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_shuffle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_shuffle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_unshuffle_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_unshuffle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_unshuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_pixel_unshuffle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_poisson_nll_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_poisson_nll_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_prelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_prelu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_relu6_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_relu6_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_rms_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_rms_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_rrelu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_scaled_dot_product_attention_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_silu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_soft_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softmin_with_dtype_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softmin_with_dtype_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softplus_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softshrink_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softsign_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softsign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_softsign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_tanhshrink_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_tanhshrink_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_loss_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_unfold_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_upsample_bilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_upsample_nearest_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nn_functional_upsample_nearest_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nonzero_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nonzero_static_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nonzero_static_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nonzero_static_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_nonzero_static_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_fro_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_fro_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_fro_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_inf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_inf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_nuc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_norm_nuc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_normal_in_place_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_normal_in_place_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_normal_number_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_normal_number_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_normal_number_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ones_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ormqr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ormqr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_outer_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_outer_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_outer_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pca_lowrank_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pca_lowrank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_permute_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_permute_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_permute_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pinverse_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pinverse_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polar_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_3_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_3_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_4_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_polygamma_polygamma_n_4_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_positive_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pow_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pow_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pow_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_pow_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_prod_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_put_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_put_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_put_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_qr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_qr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_quantile_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rand_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randint_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_randn_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ravel_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_ravel_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_real_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reciprocal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reciprocal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_renorm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_renorm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_interleave_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_interleave_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_repeat_interleave_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_as_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_as_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_reshape_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resize__cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resize__cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resize__cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resize_as__cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resize_as__cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_conj_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_resolve_conj_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_roll_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_roll_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rot90_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rot90_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rot90_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rot90_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_decimals_0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_decimals_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_round_decimals_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsqrt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsqrt_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsqrt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsqrt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsub_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_rsub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scalar_tensor_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scalar_tensor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scalar_tensor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_add_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_scatter_reduce_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_searchsorted_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_searchsorted_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_select_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sgn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_short_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_short_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_short_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_short_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_blackman_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_gaussian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signal_windows_general_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signbit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signbit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_signbit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sin_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinc_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sinh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_slice_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_softmax_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sort_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sort_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sparse_mm_reduce_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sparse_sampled_addmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_j0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_y0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_y0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_y0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_bessel_y1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_t_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_u_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_v_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_v_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_v_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_v_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_w_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_chebyshev_polynomial_w_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_entr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_entr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_erfcx_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_hermite_polynomial_h_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_hermite_polynomial_he_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i0e_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i0e_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i1e_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_i1e_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_laguerre_polynomial_l_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_legendre_polynomial_p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_legendre_polynomial_p_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_legendre_polynomial_p_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_log_ndtr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_log_ndtr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_i1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_i1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_i1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_i1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_k0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_k1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_modified_bessel_k1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_ndtr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_ndtri_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_ndtri_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_scaled_modified_bessel_k0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_scaled_modified_bessel_k1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_v_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_v_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_spherical_bessel_j0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_spherical_bessel_j0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_xlog1py_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_xlog1py_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_xlog1py_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_zeta_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_zeta_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_special_zeta_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_list_args_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_list_args_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_list_args_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_split_with_sizes_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sqrt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_square_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_square_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_square_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_square_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_square_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_square_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_squeeze_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_squeeze_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_squeeze_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_squeeze_multiple_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_stack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_stack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_std_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_std_mean_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_std_mean_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_std_unbiased_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_std_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_std_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sub_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sub_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sum_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sum_to_size_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_sum_to_size_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_svd_lowrank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_t_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_along_dim_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_along_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_along_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_take_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tensor_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tensor_split_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tensordot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tile_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tile_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_sparse_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_sparse_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_sparse_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_sparse_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_sparse_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_to_sparse_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_topk_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_topk_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_torch_ops_aten__efficient_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trace_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trace_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trace_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trace_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_transpose_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_transpose_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_transpose_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_transpose_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trapezoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trapezoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trapezoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trapezoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trapezoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triangular_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_tril_indices_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_triu_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_true_divide_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_true_divide_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trunc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_trunc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unbind_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unbind_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unbind_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unflatten_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unflatten_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unfold_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_uniform_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_uniform_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_uniform_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unique_consecutive_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unique_consecutive_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unique_cuda_uint32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_chunk_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_chunk_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_chunk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_chunk_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_split_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_split_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_split_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsafe_split_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsqueeze_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsqueeze_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_unsqueeze_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_var_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_var_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_as_complex_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_as_complex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_view_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vsplit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vstack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_vstack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_where_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_xlogy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zero__cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zero__cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zero__cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zero__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_inplace_zeros_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_H_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_T_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___getitem___cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___getitem___cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___radd___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___radd___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___radd___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rdiv___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rdiv___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rdiv___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmod___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmod___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmod___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmod___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rmul___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rpow___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rpow___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rsub___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rsub___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rsub___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rsub___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace___rsub___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__chunk_cat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__chunk_cat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_abs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_abs_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_acos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_acos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_acos_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_add_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_addcdiv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_addcdiv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_addcmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_addcmul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_addcmul_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_addcmul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_atan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_atan_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_ceil_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_ceil_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_clamp_max_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_clamp_max_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_clamp_min_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_clamp_min_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_clamp_min_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_cos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_cos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_cosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_cosh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_cosh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_div_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_div_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erfc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erfc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erfc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erfc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_erfc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_exp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_exp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_exp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_expm1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_expm1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_floor_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_floor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_floor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_floor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_floor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_frac_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lerp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lgamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lgamma_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_lgamma_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log10_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log10_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log10_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log1p_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log1p_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_log_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_maximum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_maximum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_minimum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_mul_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_neg_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_neg_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_neg_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_neg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_norm_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_norm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_pow_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_pow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_pow_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_pow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_reciprocal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_reciprocal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_reciprocal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_round_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_round_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_round_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_round_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_round_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sigmoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sigmoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sigmoid_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sin_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sqrt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sub_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_sub_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_tanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_trunc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_trunc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__foreach_zero_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__segment_reduce_lengths_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__segment_reduce_offsets_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__softmax_backward_data_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_put_accumulate_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_put_accumulate_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__unsafe_masked_index_put_accumulate_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace__upsample_bilinear2d_aa_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_abs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_abs_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_abs_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_abs_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_abs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_abs_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acos_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acosh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_acosh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_add_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_add_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addbmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addcdiv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addcmul_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addmm_decomposed_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addmm_decomposed_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addmv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_addr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_alias_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_alias_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_alias_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_alias_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_alias_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_all_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_all_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_allclose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_amin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_aminmax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_angle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_angle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_angle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_any_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_any_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_arange_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_arange_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argmax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argmin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argsort_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argsort_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argwhere_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argwhere_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argwhere_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argwhere_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_argwhere_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_partial_views_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_partial_views_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_scatter_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_scatter_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_as_strided_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asinh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_asinh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_1d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_1d_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_2d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_2d_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_2d_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_2d_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_3d_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_atleast_3d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_baddbmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bfloat16_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bincount_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bincount_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bincount_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_and_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_and_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_left_shift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_or_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_right_shift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_right_shift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_right_shift_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bitwise_xor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_block_diag_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bool_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bool_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_broadcast_tensors_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_broadcast_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_broadcast_to_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_broadcast_to_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bucketize_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bucketize_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_bucketize_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_byte_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_byte_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cartesian_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cartesian_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cartesian_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cat_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cauchy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cdouble_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ceil_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ceil_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cfloat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cfloat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_chalf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_chalf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_chalf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_chalf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cholesky_inverse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cholesky_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cholesky_solve_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_chunk_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_max_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clamp_min_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_clone_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_column_stack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_combinations_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_complex_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_complex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_conj_physical_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_constant_pad_nd_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_contiguous_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_contiguous_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_copysign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_copysign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_copysign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_corrcoef_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_corrcoef_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_corrcoef_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_corrcoef_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cos_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cos_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cosh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_count_nonzero_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_count_nonzero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_count_nonzero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_count_nonzero_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_count_nonzero_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cov_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cov_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cov_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cov_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cross_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cross_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cross_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cummin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumprod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumulative_trapezoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumulative_trapezoid_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_cumulative_trapezoid_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_deg2rad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_deg2rad_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_embed_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_embed_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diag_embed_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagflat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagflat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diagonal_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diff_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diff_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diff_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_diff_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_digamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dist_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dist_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dist_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dist_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_floor_rounding_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_no_rounding_mode_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_no_rounding_mode_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_no_rounding_mode_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_no_rounding_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_no_rounding_mode_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_trunc_rounding_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_div_trunc_rounding_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_double_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_dsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_einsum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_einsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_like_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_permuted_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_permuted_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_permuted_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_strided_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_strided_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_strided_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_empty_strided_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eq_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eq_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eq_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eq_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eq_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_equal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfinv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfinv_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfinv_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_erfinv_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_as_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_as_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expand_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expm1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_expm1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exponential_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exponential_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_exponential_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eye_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eye_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_eye_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftshift_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftshift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftshift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_fftshift_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_hfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftshift_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ifftshift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ihfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ihfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_ihfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_irfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fft_rfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fill_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fill_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fill_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fill_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flatten_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flatten_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flatten_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flip_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fliplr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fliplr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_flipud_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_float_power_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_divide_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_floor_divide_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_fmod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_frac_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_full_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_full_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_full_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_full_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_full_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gather_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gcd_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_geometric_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_geometric_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_geqrf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gradient_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gradient_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gradient_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gradient_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_gt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_half_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_half_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_heaviside_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_heaviside_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_histc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_histc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hsplit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hstack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hstack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_hypot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_i0_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_i0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_igamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_add_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_put_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_amin_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_reduce_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_select_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_select_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_index_select_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_inner_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_int_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_int_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_int_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_int_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isclose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isclose_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isfinite_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isfinite_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isfinite_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isinf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isinf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isinf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isnan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isnan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isneginf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isneginf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isposinf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isposinf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isreal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isreal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isreal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_isreal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_2inputs_2outputs_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_2inputs_2outputs_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_2inputs_2outputs_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_4inputs_with_extra_args_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_binary_return_by_ref_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_unary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_unary_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_jiterator_unary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_kron_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_kthvalue_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lcm_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ldexp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ldexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ldexp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ldexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ldexp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lerp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lerp_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lerp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lerp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lerp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lgamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lgamma_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_cond_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_cond_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_cross_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_det_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_diagonal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eig_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eig_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eigh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_eigvals_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_householder_product_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_inv_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_ldl_factor_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_ldl_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_ldl_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_lstsq_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_lstsq_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_lu_factor_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_lu_factor_ex_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_matrix_norm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_matrix_rank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_matrix_rank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_multi_dot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_norm_subgradients_at_zero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_pinv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_pinv_singular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_slogdet_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_solve_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_solve_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_solve_triangular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_svd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_svdvals_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vander_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vander_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vander_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vector_norm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linalg_vector_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linspace_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_linspace_tensor_overload_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log10_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log10_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log10_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log10_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log1p_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_softmax_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_softmax_with_dtype_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_softmax_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_log_softmax_with_dtype_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logaddexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logcumsumexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logcumsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logdet_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_and_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_and_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_and_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_and_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_not_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_or_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_or_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_xor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logical_xor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_tensor_overload_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_tensor_overload_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logspace_tensor_overload_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logsumexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logsumexp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_logsumexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_long_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_long_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_long_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lu_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_lu_unpack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mT_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mT_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mT_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_amax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_argmax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_argmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_argmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_argmin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_cumprod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_cumsum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_fill_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_fill_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_fill_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_prod_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_select_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_select_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_softmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_std_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_sum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_sum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_masked_sum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_binary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_binary_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_binary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_pool2d_with_indices_backward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_pool2d_with_indices_backward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_no_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_with_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_with_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_with_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_with_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_with_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_max_reduction_with_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_maximum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_median_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_meshgrid_list_of_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_meshgrid_list_of_tensors_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_meshgrid_variadic_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_binary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_reduction_no_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_reduction_no_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_min_reduction_with_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_minimum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_minimum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_minimum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mode_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mode_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mode_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_movedim_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_movedim_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_msort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_msort_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mul_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mul_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mul_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nan_to_num_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nan_to_num_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nan_to_num_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nan_to_num_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nanmean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nansum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_narrow_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_narrow_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_narrow_copy_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_narrow_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_narrow_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_native_batch_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_native_batch_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_native_batch_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_native_layer_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ne_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ne_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ne_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ne_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ne_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ne_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_empty_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_empty_strided_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_full_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_ones_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_ones_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_ones_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_ones_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_zeros_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_new_zeros_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_avg_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_avg_pool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_avg_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_avg_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_max_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_adaptive_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_alpha_dropout_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_alpha_dropout_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_batch_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_batch_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_batch_norm_without_cudnn_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_binary_cross_entropy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_binary_cross_entropy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_binary_cross_entropy_with_logits_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_channel_shuffle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_channel_shuffle_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_channel_shuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv3d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_conv_transpose3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_cosine_embedding_loss_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_cosine_embedding_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_cosine_embedding_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_cross_entropy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_dropout3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_dropout3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_dropout_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_embedding_bag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_embedding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_fractional_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_gaussian_nll_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_gelu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_glu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_glu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_grid_sample_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_group_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_group_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hardswish_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hardtanh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hardtanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_hinge_embedding_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_instance_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_instance_norm_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_area_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_area_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_bicubic_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_nearest_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_interpolate_nearest_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_l1_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_l1_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_layer_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_layer_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_layer_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_linear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_margin_ranking_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_max_pool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_max_unpool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_max_unpool1d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_max_unpool2d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_max_unpool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_mish_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_mse_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_multi_head_attention_forward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_multi_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_multilabel_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_multilabel_soft_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_normalize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_normalize_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_circular_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_circular_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_circular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_circular_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_constant_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_constant_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_reflect_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_reflect_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_reflect_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_negative_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pad_replicate_negative_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pairwise_distance_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pairwise_distance_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pairwise_distance_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pixel_shuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pixel_shuffle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pixel_shuffle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pixel_shuffle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_pixel_unshuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_poisson_nll_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_poisson_nll_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_poisson_nll_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_prelu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_prelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_relu6_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_rms_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_rms_norm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_selu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_silu_complex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_soft_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_softmin_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_softmin_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_softplus_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_softsign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_softsign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_tanhshrink_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_tanhshrink_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_tanhshrink_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_threshold_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_threshold_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_loss_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_unfold_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_unfold_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_upsample_bilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nn_functional_upsample_nearest_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nonzero_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nonzero_static_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_nonzero_static_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_fro_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_fro_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_inf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_norm_nuc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_normal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ones_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ormqr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ormqr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_outer_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_outer_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_outer_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_permute_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_permute_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_3_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_4_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_polygamma_polygamma_n_4_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_positive_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_positive_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_pow_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_pow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_prod_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_put_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_put_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_qr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_quantile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rad2deg_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rad2deg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rad2deg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rand_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rand_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randint_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randint_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randint_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_randn_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ravel_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ravel_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_ravel_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_real_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_real_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_real_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reciprocal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reciprocal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_remainder_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_remainder_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_remainder_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_renorm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_interleave_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_interleave_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_interleave_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_interleave_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_interleave_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_repeat_interleave_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_reshape_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resize__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resize__cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resize__cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resize_as__cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resolve_conj_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resolve_conj_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resolve_conj_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_resolve_neg_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_roll_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_roll_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_roll_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rot90_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rot90_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rot90_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_round_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_round_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_round_decimals_0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_round_decimals_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_round_decimals_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_round_decimals_3_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsub_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_rsub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scalar_tensor_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scalar_tensor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_add_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_amax_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_prod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_sum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_scatter_reduce_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_searchsorted_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_select_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sgn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sgn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sgn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_short_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_short_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_short_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_short_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sigmoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_signal_windows_exponential_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_signal_windows_general_hamming_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_signbit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_signbit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sinc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_slice_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_softmax_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_softmax_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_softmax_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sparse_mm_reduce_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_airy_ai_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_airy_ai_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_airy_ai_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_airy_ai_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_j1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_bessel_y1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_t_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_t_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_u_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_u_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_v_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_v_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_v_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_w_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_chebyshev_polynomial_w_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_entr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_entr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_entr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_entr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_erfcx_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_erfcx_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_hermite_polynomial_he_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_hermite_polynomial_he_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_hermite_polynomial_he_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i0e_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i0e_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i1e_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_i1e_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_laguerre_polynomial_l_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_legendre_polynomial_p_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_legendre_polynomial_p_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_log_ndtr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_log_ndtr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_log_ndtr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_i1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_k0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_modified_bessel_k0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_ndtr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_ndtr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_ndtri_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_scaled_modified_bessel_k0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_scaled_modified_bessel_k0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_scaled_modified_bessel_k1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_scaled_modified_bessel_k1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_v_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_w_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_shifted_chebyshev_polynomial_w_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_spherical_bessel_j0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_spherical_bessel_j0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_spherical_bessel_j0_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_xlog1py_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_xlog1py_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_xlog1py_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_zeta_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_zeta_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_special_zeta_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_list_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_split_with_sizes_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sqrt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_square_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_square_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_square_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_square_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_square_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_squeeze_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_stack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_stack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_stack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_mean_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_mean_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_mean_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_std_mean_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sub_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sum_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sum_to_size_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_sum_to_size_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_svd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_copy_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_t_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_along_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_along_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_along_dim_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_along_dim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_take_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tan_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tanh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tensor_split_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tensor_split_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tensordot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tile_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_to_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_to_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_to_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_topk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_topk_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trace_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_transpose_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_transpose_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_transpose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_transpose_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trapezoid_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trapezoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trapezoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trapz_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_tril_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_triu_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_true_divide_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trunc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_trunc_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unbind_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unbind_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unbind_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unbind_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unflatten_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unflatten_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unflatten_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unfold_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unfold_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unfold_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unfold_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_uniform_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_consecutive_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_consecutive_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unique_cuda_uint16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unravel_index_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unravel_index_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsafe_chunk_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsafe_split_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsafe_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_unsqueeze_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_mean_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_var_mean_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vdot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_as_complex_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_view_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vsplit_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vstack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_vstack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_where_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_where_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_where_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_xlogy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_xlogy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_xlogy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_xlogy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_xlogy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zero__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zero__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_meta_outplace_zeros_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_H_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_H_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_T_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_T_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___getitem___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___getitem___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___getitem___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rdiv___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rdiv___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rdiv___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmatmul___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmatmul___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmod___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmod___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmul___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rmul___cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___ror___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rpow___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rsub___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rsub___cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace___rsub___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__batch_norm_with_update_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__chunk_cat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_abs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_abs_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_add_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcdiv_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcmul_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcmul_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcmul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_addcmul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_asin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_asin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_asin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_asin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_asin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_atan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_atan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_atan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_atan_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_ceil_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_ceil_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_max_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_max_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_max_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_clamp_min_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cos_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cos_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_cosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_div_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_div_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_erf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_erf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_erf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_erfc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_exp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_exp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_expm1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_expm1_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_expm1_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_floor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_floor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_floor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_frac_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_lerp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_lgamma_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log10_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log10_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log10_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log1p_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log1p_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_log_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_max_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_max_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_max_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_minimum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_minimum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_mul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_mul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_mul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_mul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_neg_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_neg_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_norm_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_norm_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_norm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_pow_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_pow_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_pow_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_pow_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_pow_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_reciprocal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_reciprocal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_round_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_round_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sigmoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sigmoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sqrt_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sqrt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_sub_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_tanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_tanh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_tanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_tanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_trunc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_zero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_zero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_zero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_zero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_zero_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__foreach_zero_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__segment_reduce_offsets_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__softmax_backward_data_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_put_accumulate_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace__unsafe_masked_index_put_accumulate_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_abs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_abs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_abs_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_abs_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_acos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_acos_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_acos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_acos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_acosh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_add_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addbmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addcmul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_addr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_alias_copy_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_alias_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_alias_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides___rpow___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_atan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_frac_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_log10_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_maximum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_sigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__foreach_tan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides__segment_reduce_lengths_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_addmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_addmv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_addr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_aminmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_argmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_atan2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_baddbmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_bitwise_or_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_bitwise_xor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_block_diag_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_bmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_ceil_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_cholesky_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_complex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_cummax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_diag_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_diagonal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_digamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_dist_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_empty_strided_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_equal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_erfinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_expand_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_exponential_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_eye_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fft_fft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fft_ifftshift_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fft_ihfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fft_irfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_flatten_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_floor_divide_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_fmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_frexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_full_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_gather_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_ge_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_geometric_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_grid_sampler_2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_gt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_hsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_imag_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_index_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_index_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_inner_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_int_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_item_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_jiterator_2inputs_2outputs_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_lgamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_det_singular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_diagonal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_eigh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_eigvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_ldl_factor_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_lstsq_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_matrix_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_matrix_power_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_matrix_rank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_svd_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linalg_tensorinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_linspace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_log_normal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_logical_and_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_logical_not_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_logical_xor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_lu_unpack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_masked_argmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_masked_median_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_max_pool2d_with_indices_backward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_min_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_min_reduction_with_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_mode_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_movedim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_msort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nanmean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nanmedian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_narrow_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_new_full_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_new_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_adaptive_avg_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_avg_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_celu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_conv1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_conv3d_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_conv_transpose1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_fractional_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_fractional_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_interpolate_area_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_interpolate_linear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_interpolate_nearest-exact_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_l1_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_logsigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_max_unpool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_mse_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_silu_complex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_silu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_softsign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nn_functional_upsample_nearest_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_norm_nuc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_ones_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_polygamma_polygamma_n_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_rand_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_randn_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_round_decimals_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_scalar_tensor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_signal_windows_bartlett_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_signal_windows_gaussian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_signal_windows_nuttall_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_i0e_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_i1e_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_modified_bessel_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_special_ndtr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_split_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_split_list_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_split_with_sizes_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_squeeze_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_svd_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_svd_lowrank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_take_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_tanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_tile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_triangular_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_unflatten_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_unravel_index_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_var_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_view_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_view_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_xlogy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_all_strides_zeros_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_amax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_amax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_amax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_amin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_aminmax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_aminmax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_angle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_angle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_any_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_any_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_any_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_any_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_arange_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_arange_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmax_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argmin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argsort_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argwhere_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argwhere_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argwhere_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argwhere_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_argwhere_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_partial_views_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_partial_views_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_partial_views_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_as_strided_scatter_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asinh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asinh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_asinh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atan2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atanh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_1d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_2d_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_3d_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_atleast_3d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_baddbmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bernoulli_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bfloat16_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bfloat16_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bfloat16_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_and_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_and_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_left_shift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_not_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_right_shift_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_xor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_bitwise_xor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_block_diag_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_block_diag_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_tensors_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_tensors_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_to_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_to_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_to_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_broadcast_to_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_byte_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_byte_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cartesian_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cartesian_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cat_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cauchy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cdouble_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ceil_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cfloat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cfloat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chalf_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chalf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chalf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chalf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_char_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_char_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_char_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cholesky_inverse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cholesky_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chunk_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_chunk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_min_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_min_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_min_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clamp_min_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clone_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clone_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clone_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_clone_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_column_stack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_column_stack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_column_stack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_combinations_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_combinations_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_combinations_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_combinations_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_physical_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_conj_physical_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_constant_pad_nd_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_constant_pad_nd_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_constant_pad_nd_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_contiguous_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_copysign_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_copysign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_copysign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_corrcoef_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_corrcoef_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_corrcoef_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cos_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cos_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cosh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cosh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_count_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cross_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cross_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cross_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cross_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cross_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cummax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cummax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumprod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumprod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumsum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumulative_trapezoid_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumulative_trapezoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_cumulative_trapezoid_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_deg2rad_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_deg2rad_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_deg2rad_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_deg2rad_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_embed_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_embed_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diag_embed_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagflat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagflat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagflat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diagonal_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diff_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diff_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diff_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diff_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diff_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_diff_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_digamma_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dist_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_floor_rounding_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_floor_rounding_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_no_rounding_mode_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_no_rounding_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_trunc_rounding_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_trunc_rounding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_div_trunc_rounding_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_double_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_double_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_double_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_double_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_double_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dsplit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dsplit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dstack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_dstack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_einsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_einsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_permuted_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_permuted_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_strided_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_empty_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_eq_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_eq_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_equal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_equal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erfc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erfinv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_erfinv_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_as_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_as_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expand_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expm1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expm1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expm1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_expm1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_exponential_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftn_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftshift_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftshift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_fftshift_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_hfftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ifftshift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_ihfftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_irfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fft_rfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fill_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_flatten_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_flip_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fliplr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fliplr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fliplr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_float_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_floor_divide_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_fmod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_frac_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_frac_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_frexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_like_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_full_like_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gather_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gather_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gather_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gather_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ge_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ge_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ge_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_geometric_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_geqrf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_geqrf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_gradient_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_grid_sampler_2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_half_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_half_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_histc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_hsplit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_hstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_i0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_i0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_i0_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_imag_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_put_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_put_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_amax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_amin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_mean_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_reduce_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_index_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_int_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isclose_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isfinite_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isfinite_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isinf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isinf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isnan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isnan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isnan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isnan_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isneginf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isneginf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isposinf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isposinf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isreal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_isreal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_istft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_item_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_item_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_2inputs_2outputs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_2inputs_2outputs_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_2inputs_2outputs_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_4inputs_with_extra_args_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_4inputs_with_extra_args_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_4inputs_with_extra_args_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_4inputs_with_extra_args_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_binary_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_binary_return_by_ref_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_unary_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_unary_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_unary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_jiterator_unary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_kron_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_kron_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_kron_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_kron_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_kthvalue_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lcm_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ldexp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_le_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_le_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lerp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cholesky_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cholesky_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cholesky_ex_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cholesky_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cond_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cross_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cross_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cross_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_cross_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_det_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_det_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_det_singular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_diagonal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_diagonal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_diagonal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_eig_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_eigh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_eigvalsh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_inv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_inv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_ldl_factor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_ldl_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lstsq_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lstsq_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_factor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_factor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_factor_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_lu_solve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_rank_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_rank_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_rank_hermitian_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_matrix_rank_hermitian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_multi_dot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_norm_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_pinv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_pinv_hermitian_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_pinv_hermitian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_slogdet_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_slogdet_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_solve_triangular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_svdvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_vander_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_vecdot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linalg_vecdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_tensor_overload_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_tensor_overload_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_linspace_tensor_overload_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log10_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log10_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log1p_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_normal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_normal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_softmax_with_dtype_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_softmax_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_softmax_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_log_softmax_with_dtype_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logaddexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logdet_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logdet_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_and_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_and_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_not_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_not_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_or_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_or_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_xor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_xor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logical_xor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logspace_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logspace_tensor_overload_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logspace_tensor_overload_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logsumexp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logsumexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logsumexp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_logsumexp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_long_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_long_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_long_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lu_unpack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_lu_unpack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mH_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mH_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mH_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mT_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mT_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_argmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_argmax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_argmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_argmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_cumprod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_cumprod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_cumsum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_fill_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_fill_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_fill_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_fill_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_fill_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_log_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_logsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_logsumexp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_median_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_prod_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_softmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_std_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_std_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_std_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_std_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_sum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_sum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_masked_var_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_matrix_exp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_matrix_exp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_matrix_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_no_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_no_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_no_dim_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_no_dim_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_with_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_max_reduction_with_dim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_maximum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_median_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_list_of_tensors_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_list_of_tensors_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_list_of_tensors_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_list_of_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_list_of_tensors_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_list_of_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_variadic_tensors_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_meshgrid_variadic_tensors_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_binary_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_binary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_no_dim_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_no_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_no_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_with_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_with_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_with_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_min_reduction_with_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mode_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_movedim_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_movedim_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_movedim_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_movedim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_movedim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_msort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_msort_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mul_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_multinomial_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nan_to_num_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nan_to_num_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nanmean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nansum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_narrow_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_native_batch_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_native_layer_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ne_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_neg_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_neg_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_strided_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_strided_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_strided_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_strided_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_empty_strided_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_full_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_full_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_full_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_full_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_ones_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_ones_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_ones_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_ones_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_zeros_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_zeros_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_new_zeros_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_avg_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_adaptive_max_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_alpha_dropout_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_batch_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_batch_norm_without_cudnn_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_binary_cross_entropy_with_logits_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_channel_shuffle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_channel_shuffle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_channel_shuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose1d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose1d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose2d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_conv_transpose3d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_cosine_embedding_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_cosine_embedding_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_dropout2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_dropout2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_dropout3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_elu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_elu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_embedding_bag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_embedding_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_feature_alpha_dropout_with_train_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_fractional_max_pool3d_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_gaussian_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_gaussian_nll_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_group_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hardsigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hardsigmoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_hardtanh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_instance_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_instance_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_area_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_bicubic_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_bicubic_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_linear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_interpolate_trilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_kl_div_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_kl_div_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_kl_div_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_l1_loss_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_l1_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_layer_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_leaky_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_linear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_linear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_margin_ranking_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_margin_ranking_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_pool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_pool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_pool2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_unpool1d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_unpool2d_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_unpool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_unpool3d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_max_unpool3d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_mish_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_mish_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_multi_head_attention_forward_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_multilabel_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_normalize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_normalize_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_circular_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_circular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_circular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_circular_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_circular_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_circular_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_constant_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_constant_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_constant_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_constant_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_reflect_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_reflect_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_reflect_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_negative_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pad_replicate_negative_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pairwise_distance_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pairwise_distance_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pixel_shuffle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pixel_shuffle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pixel_shuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pixel_shuffle_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_pixel_unshuffle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_poisson_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_poisson_nll_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_poisson_nll_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_poisson_nll_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu6_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu6_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu6_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_relu_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_scaled_dot_product_attention_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_selu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_soft_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softmin_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_softplus_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_tanhshrink_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_tanhshrink_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_tanhshrink_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_threshold_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_threshold_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_threshold_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nn_functional_unfold_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_static_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_static_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_nonzero_static_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_fro_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_norm_inf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_normal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_normal_in_place_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_normal_in_place_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ones_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ormqr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_outer_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_outer_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_outer_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_outer_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_pca_lowrank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_permute_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_permute_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_pinverse_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polar_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polar_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_3_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_4_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_4_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_polygamma_polygamma_n_4_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_positive_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_pow_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_pow_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_pow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_pow_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_put_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_put_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_put_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_put_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_put_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_qr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_qr_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rad2deg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rad2deg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rad2deg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rand_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randint_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randint_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randint_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randint_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_randn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_ravel_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_real_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_real_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reciprocal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_remainder_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_remainder_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_renorm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_renorm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_renorm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_renorm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_interleave_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_repeat_interleave_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_as_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_as_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_reshape_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resize_as__cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resize_as__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_conj_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_conj_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_conj_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_conj_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_neg_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_resolve_neg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_roll_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rot90_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rot90_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rot90_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rot90_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_round_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_round_decimals_0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_round_decimals_0_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_round_decimals_neg_3_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rsqrt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rsqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_rsqrt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scalar_tensor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_add_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_amax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_amin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_amin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_prod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_sum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_sum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_scatter_reduce_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_searchsorted_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_searchsorted_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_searchsorted_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_select_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sgn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sgn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sgn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_short_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_short_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sigmoid_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sigmoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sign_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sign_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signal_windows_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signal_windows_exponential_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signal_windows_gaussian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signal_windows_general_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signbit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_signbit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_slice_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_softmax_with_dtype_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_softmax_with_dtype_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_softmax_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_airy_ai_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j1_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_j1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_bessel_y0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_u_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_v_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_w_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_chebyshev_polynomial_w_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_entr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_entr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_erfcx_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_erfcx_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_hermite_polynomial_h_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_hermite_polynomial_he_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_hermite_polynomial_he_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_i0e_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_i0e_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_i1e_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_laguerre_polynomial_l_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_laguerre_polynomial_l_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_laguerre_polynomial_l_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_legendre_polynomial_p_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_log_ndtr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_i0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_i1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_k0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_modified_bessel_k1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_ndtr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_ndtri_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_scaled_modified_bessel_k0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_scaled_modified_bessel_k0_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_scaled_modified_bessel_k1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_scaled_modified_bessel_k1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_scaled_modified_bessel_k1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_v_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_spherical_bessel_j0_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_spherical_bessel_j0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_spherical_bessel_j0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_xlog1py_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_xlog1py_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_xlog1py_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_xlog1py_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_xlog1py_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_xlog1py_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_zeta_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_zeta_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_zeta_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_zeta_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_special_zeta_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_list_args_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_copy_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_split_with_sizes_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_multiple_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_squeeze_multiple_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_stack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_mean_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_std_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_to_size_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_to_size_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_sum_to_size_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_svd_lowrank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_svd_lowrank_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_t_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_along_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_along_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_along_dim_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_take_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensor_split_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensor_split_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensor_split_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tensor_split_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tile_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tile_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tile_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_sparse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_sparse_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_to_sparse_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_topk_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_topk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trace_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trace_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trace_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trace_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_transpose_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_transpose_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_transpose_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_transpose_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapezoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapz_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapz_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trapz_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_triangular_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tril_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tril_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_tril_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_triu_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_triu_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_true_divide_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_true_divide_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trunc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trunc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_trunc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unbind_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unbind_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unbind_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unflatten_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unflatten_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unfold_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_uniform_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_consecutive_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_consecutive_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unique_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unravel_index_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unravel_index_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_chunk_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_chunk_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_split_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_split_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsafe_split_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsqueeze_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsqueeze_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_unsqueeze_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_var_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vdot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_as_real_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_view_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vsplit_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vsplit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_vstack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_where_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_xlogy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_xlogy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zero__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_inplace_zeros_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_H_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_H_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_H_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_H_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_T_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_T_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___getitem___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___getitem___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___getitem___cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___getitem___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___radd___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___radd___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rand___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rand___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rand___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rdiv___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rdiv___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rdiv___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rdiv___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmatmul___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmod___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmod___cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmod___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmod___cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmod___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmod___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmul___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rmul___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___ror___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rpow___cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rpow___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rsub___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rsub___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rsub___cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rxor___cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace___rxor___cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__batch_norm_with_update_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__chunk_cat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__chunk_cat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_abs_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_abs_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_abs_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_abs_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_acos_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_acos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_acos_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_add_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_add_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_add_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_addcdiv_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_addcmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_asin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_asin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_asin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_atan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_atan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_ceil_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_ceil_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_ceil_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_clamp_max_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_clamp_max_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_clamp_min_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_clamp_min_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cos_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cosh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cosh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_cosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_div_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_div_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_erfc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_exp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_exp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_exp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_expm1_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_floor_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_floor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_floor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_floor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_frac_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_frac_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_frac_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_frac_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_frac_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_frac_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lerp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_lgamma_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log10_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log10_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log10_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_log_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_max_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_max_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_max_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_maximum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_maximum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_maximum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_minimum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_minimum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_mul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_neg_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_neg_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_neg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_norm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_norm_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_norm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_pow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_reciprocal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_reciprocal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_round_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_round_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_round_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sigmoid_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sign_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sign_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sqrt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sqrt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sqrt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sub_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sub_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_sub_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tan_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tanh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_tanh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_trunc_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_trunc_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_trunc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_trunc_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_zero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_zero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__foreach_zero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__segment_reduce_offsets_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__softmax_backward_data_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_put_accumulate_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_put_accumulate_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__unsafe_masked_index_put_accumulate_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__upsample_bilinear2d_aa_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace__upsample_bilinear2d_aa_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_abs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_abs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acosh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_acosh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addcmul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addmm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addmm_decomposed_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addmm_decomposed_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addmv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addmv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_addr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_alias_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_alias_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_T_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides___rand___cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides___rmod___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides___rmul___cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_abs_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_asin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_clamp_min_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_cos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_cosh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_log2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_sign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_sub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__foreach_zero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__native_batch_norm_legit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__softmax_backward_data_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides__upsample_bilinear2d_aa_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_addr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_argmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_argwhere_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_asin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_atleast_2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_baddbmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_bernoulli_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_bitwise_not_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_bitwise_right_shift_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_broadcast_shapes_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cdist_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cdouble_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_conj_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_cummin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_diag_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_empty_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_eq_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_expm1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_hfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fft_hfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_flip_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_fmod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_gradient_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_grid_sampler_2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_imag_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_index_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_index_reduce_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_isfinite_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_isinf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_isneginf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_jiterator_4inputs_with_extra_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_le_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_cond_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_eigh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_lstsq_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_matrix_power_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_matrix_rank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_pinv_hermitian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_linalg_tensorsolve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_log2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_log_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_logcumsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_logical_not_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_lt_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_lu_unpack_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_mT_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_cumprod_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_log_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_median_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_masked_sum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_matrix_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_max_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_max_reduction_no_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_max_reduction_with_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_min_reduction_with_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nanmean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_narrow_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_new_empty_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_new_full_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_adaptive_avg_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_binary_cross_entropy_with_logits_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_conv3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_dropout2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_dropout3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_embedding_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_interpolate_area_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_interpolate_bilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_layer_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_logsigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_margin_ranking_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_max_unpool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_max_unpool2d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_multi_head_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_relu6_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_scaled_dot_product_attention_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_softmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_softsign_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_tanhshrink_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_threshold_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_nn_functional_upsample_nearest_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_norm_fro_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_norm_nuc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_outer_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_pow_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_put_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_randn_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_remainder_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_repeat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_resize_as__cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_rot90_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_round_decimals_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_scalar_tensor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_scatter_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_signal_windows_blackman_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_signal_windows_hann_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_slice_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_airy_ai_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_bessel_y0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_chebyshev_polynomial_w_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_log_ndtr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_modified_bessel_k1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_special_shifted_chebyshev_polynomial_t_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_std_mean_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_std_mean_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_std_unbiased_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_t_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_take_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_tan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_tensor_split_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_to_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_to_sparse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_triangular_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_tril_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_unbind_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_unique_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_unsqueeze_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_view_as_complex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_where_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_zero__cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_all_strides_zeros_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_allclose_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_allclose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_aminmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_aminmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_angle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_any_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_any_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_arange_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_arange_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_arange_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_arange_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argmin_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argsort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argsort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argsort_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argsort_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argwhere_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argwhere_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_argwhere_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_partial_views_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_partial_views_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_partial_views_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_as_strided_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_asin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_asinh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_asinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atanh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atanh_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_1d_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_3d_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_atleast_3d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_baddbmm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bfloat16_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_and_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_and_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_and_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_left_shift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_or_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_or_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_or_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_right_shift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_right_shift_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bitwise_xor_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_block_diag_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bool_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_bool_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_tensors_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_tensors_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_tensors_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_to_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_to_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_to_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_to_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_broadcast_to_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_byte_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cartesian_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cartesian_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cat_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cdouble_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ceil_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cfloat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cfloat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cfloat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_chalf_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_chalf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_char_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_char_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cholesky_inverse_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cholesky_solve_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_chunk_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_chunk_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clamp_max_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clone_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_clone_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_column_stack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_column_stack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_column_stack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_combinations_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_combinations_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_complex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_physical_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_physical_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_conj_physical_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_constant_pad_nd_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_constant_pad_nd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_constant_pad_nd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_constant_pad_nd_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_contiguous_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_contiguous_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_contiguous_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_copysign_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_corrcoef_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_corrcoef_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_corrcoef_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cos_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cosh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cosh_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cosh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cosh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_count_nonzero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_count_nonzero_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_count_nonzero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_count_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cov_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cov_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cross_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cross_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cummin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumprod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumsum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumulative_trapezoid_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumulative_trapezoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_cumulative_trapezoid_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_deg2rad_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_embed_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_embed_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diag_embed_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagflat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagflat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagflat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_scatter_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diagonal_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diff_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diff_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_diff_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_digamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_digamma_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dist_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_floor_rounding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_floor_rounding_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_no_rounding_mode_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_no_rounding_mode_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_no_rounding_mode_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_no_rounding_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_trunc_rounding_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_trunc_rounding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_div_trunc_rounding_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_double_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dstack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dstack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_dstack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_einsum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_permuted_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_permuted_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_strided_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_empty_strided_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eq_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eq_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_equal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_equal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_equal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_equal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfc_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_erfinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_as_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expand_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expm1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_expm1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_exponential_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_eye_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftshift_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftshift_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_fftshift_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_hfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ifftshift_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_ihfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_irfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_rfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fft_rfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fill_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fill_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flatten_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flatten_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fliplr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fliplr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fliplr_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fliplr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fliplr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fliplr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flipud_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flipud_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_flipud_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_float_power_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_floor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_floor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_floor_divide_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_fmin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_frac_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_frac_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_frexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_full_like_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gather_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gather_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gather_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gather_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ge_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ge_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_geometric_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_geometric_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gradient_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gradient_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gradient_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_grid_sampler_2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_gt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_half_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_half_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_half_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_half_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_half_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_half_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_heaviside_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_heaviside_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hsplit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hsplit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_hstack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_i0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_i0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_i0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_imag_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_add_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_put_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_put_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_put_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_amax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_amin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_amin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_amin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_reduce_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_select_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_select_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_select_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_index_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_int_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_int_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_int_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isclose_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isclose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isfinite_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isfinite_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isinf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isinf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isinf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isnan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isnan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isnan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isnan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isneginf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isneginf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isneginf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isposinf_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isposinf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_isreal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_item_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_2inputs_2outputs_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_4inputs_with_extra_args_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_binary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_binary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_binary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_binary_return_by_ref_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_unary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_unary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_jiterator_unary_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kron_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kron_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kron_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kron_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kthvalue_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_kthvalue_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lcm_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lcm_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ldexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ldexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ldexp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ldexp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_le_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_le_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_le_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lerp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lgamma_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lgamma_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_cholesky_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_cond_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_cross_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_cross_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_det_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_det_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_diagonal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_diagonal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_eig_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_eig_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_eigvals_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_eigvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_eigvals_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_householder_product_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_householder_product_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_inv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_inv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_ldl_factor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_ldl_solve_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lu_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lu_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lu_factor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lu_factor_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_lu_solve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_matrix_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_matrix_rank_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_matrix_rank_hermitian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_norm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_norm_subgradients_at_zero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_norm_subgradients_at_zero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_pinv_hermitian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_pinv_singular_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_qr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_solve_triangular_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vander_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vander_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vander_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vecdot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vecdot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linalg_vecdot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_tensor_overload_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_tensor_overload_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_linspace_tensor_overload_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log10_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log10_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log10_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log1p_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log1p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log1p_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_with_dtype_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_log_softmax_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logaddexp2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logcumsumexp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logcumsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logcumsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logdet_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logdet_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_not_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_not_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_not_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_or_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_xor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logical_xor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_tensor_overload_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_tensor_overload_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logspace_tensor_overload_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logsumexp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_logsumexp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_long_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_long_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_long_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_lu_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mH_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mT_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mT_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mT_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mT_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_amax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_amin_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_amin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_amin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_argmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumprod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumsum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_cumsum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_fill_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_fill_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logaddexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logsumexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logsumexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_logsumexp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_mean_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_median_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_normalize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_normalize_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_normalize_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_select_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_select_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_softmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_std_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_sum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_var_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_var_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_masked_var_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_matrix_exp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_binary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_binary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_binary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_pool2d_with_indices_backward_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_pool2d_with_indices_backward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_no_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_no_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_max_reduction_no_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_maximum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_median_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_meshgrid_list_of_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_meshgrid_list_of_tensors_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_meshgrid_variadic_tensors_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_binary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_reduction_no_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_reduction_no_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_reduction_with_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_reduction_with_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_min_reduction_with_dim_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mode_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mode_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mode_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mode_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_movedim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_movedim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_msort_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mul_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_multinomial_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nan_to_num_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nan_to_num_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nanmean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nanmedian_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nansum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nansum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nansum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nansum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_narrow_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_narrow_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_narrow_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_narrow_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_narrow_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_narrow_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_native_batch_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_native_dropout_backward_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_native_dropout_backward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_native_dropout_backward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_neg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_neg_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_empty_strided_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_empty_strided_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_empty_strided_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_full_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_ones_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_ones_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_zeros_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_new_zeros_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nextafter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_adaptive_avg_pool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_adaptive_avg_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_adaptive_max_pool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_adaptive_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_adaptive_max_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_avg_pool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_batch_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_batch_norm_without_cudnn_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_bilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_bilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_binary_cross_entropy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_binary_cross_entropy_with_logits_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_channel_shuffle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_channel_shuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_channel_shuffle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_channel_shuffle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_conv_transpose1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_cosine_embedding_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_dropout3d_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_dropout_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_elu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_elu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_embedding_bag_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_embedding_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_embedding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_fractional_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_gaussian_nll_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_gelu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_grid_sample_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_grid_sample_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_group_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_hardswish_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_hardtanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_hardtanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_huber_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_instance_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_instance_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_area_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_area_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_area_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_linear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_nearest-exact_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_nearest-exact_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_nearest_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_nearest_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_trilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_trilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_interpolate_trilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_l1_loss_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_leaky_relu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_leaky_relu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_linear_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_linear_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_local_response_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_logsigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_logsigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_margin_ranking_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_margin_ranking_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_margin_ranking_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_unpool1d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_max_unpool2d_grad_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_mish_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_mish_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_mse_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_multi_head_attention_forward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_multi_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_multilabel_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_normalize_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_normalize_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_normalize_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_circular_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_circular_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_circular_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_circular_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_circular_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_constant_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_constant_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_negative_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pad_replicate_negative_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pairwise_distance_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_unshuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_unshuffle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_unshuffle_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_pixel_unshuffle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_poisson_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_poisson_nll_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_poisson_nll_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_poisson_nll_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_prelu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_prelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_relu6_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_relu6_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_relu6_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_relu6_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_relu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_relu_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_relu_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_rms_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_rms_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_rrelu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_rrelu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_selu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_silu_complex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_silu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_silu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_smooth_l1_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softmin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softmin_with_dtype_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softmin_with_dtype_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softshrink_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softsign_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softsign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softsign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_softsign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_tanhshrink_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_tanhshrink_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_tanhshrink_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_tanhshrink_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_threshold_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_threshold_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_threshold_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nn_functional_unfold_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_static_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_static_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_static_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_static_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_nonzero_static_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_fro_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_inf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_norm_nuc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_normal_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_normal_number_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ones_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_outer_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_outer_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_outer_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_permute_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_permute_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_permute_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_permute_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_0_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_2_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_3_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_3_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_4_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_polygamma_polygamma_n_4_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_positive_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_positive_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_positive_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pow_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pow_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_pow_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_prod_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_put_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_put_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_put_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_put_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rad2deg_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rand_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rand_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randint_like_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randn_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randn_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_randn_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ravel_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ravel_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_ravel_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_real_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_real_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reciprocal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reciprocal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reciprocal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_reciprocal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_remainder_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_remainder_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_remainder_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_remainder_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_repeat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize__cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize__cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize__cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize__cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize_as__cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resize_as__cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resolve_conj_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resolve_neg_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resolve_neg_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_resolve_neg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_roll_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_roll_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_roll_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rot90_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rot90_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_round_decimals_0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_round_decimals_0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_round_decimals_3_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_round_decimals_neg_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsqrt_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsqrt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsub_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_rsub_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scalar_tensor_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scalar_tensor_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scalar_tensor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scalar_tensor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scalar_tensor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_add_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_amax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_amax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_amin_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_prod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_sum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_sum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_scatter_reduce_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_searchsorted_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_select_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sgn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sgn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sgn_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sgn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_short_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_short_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_short_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sigmoid_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_bartlett_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_blackman_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_cosine_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signal_windows_nuttall_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signbit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signbit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_signbit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinc_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sinh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_slice_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_slice_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_slice_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_slice_scatter_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_slice_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_softmax_with_dtype_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_softmax_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_softmax_with_dtype_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sort_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sparse_mm_reduce_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_j0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_j0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_j1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_j1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_bessel_y0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_t_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_t_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_w_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_chebyshev_polynomial_w_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_entr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_entr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_erfcx_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_erfcx_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i0e_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i0e_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i1e_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i1e_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_i1e_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_laguerre_polynomial_l_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_laguerre_polynomial_l_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_laguerre_polynomial_l_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_legendre_polynomial_p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_legendre_polynomial_p_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_legendre_polynomial_p_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_log_ndtr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_log_ndtr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_i1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_modified_bessel_k0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_ndtr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_ndtri_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_scaled_modified_bessel_k0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_scaled_modified_bessel_k1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_scaled_modified_bessel_k1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_scaled_modified_bessel_k1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_scaled_modified_bessel_k1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_v_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_v_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_w_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_w_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_shifted_chebyshev_polynomial_w_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_spherical_bessel_j0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_spherical_bessel_j0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_xlog1py_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_xlog1py_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_xlog1py_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_zeta_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_special_zeta_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_list_args_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_with_sizes_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_with_sizes_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_split_with_sizes_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sqrt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sqrt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_square_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_square_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_square_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_squeeze_multiple_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_std_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_std_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_stft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sub_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_sum_to_size_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_t_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_t_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_along_dim_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_along_dim_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_take_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tanh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tensor_split_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tensor_split_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tensor_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tensordot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tile_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tile_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_sparse_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_sparse_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_sparse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_to_sparse_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_topk_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_torch__scaled_mm_cuda_float8_e4m3fn, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trace_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trace_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_transpose_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_transpose_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_transpose_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_transpose_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_transpose_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trapezoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trapezoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trapz_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_trapz_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_triangular_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tril_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tril_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_tril_indices_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_triu_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_true_divide_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_true_divide_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_true_divide_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_true_divide_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unbind_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unbind_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unbind_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unbind_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unbind_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unflatten_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unflatten_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unflatten_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unflatten_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unfold_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_uniform_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unique_consecutive_cuda_bool, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unique_consecutive_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unique_consecutive_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unique_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unique_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unravel_index_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unravel_index_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_chunk_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_chunk_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_chunk_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_chunk_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsafe_split_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_unsqueeze_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_var_unbiased_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vdot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_as_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_as_cuda_float64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_as_real_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_view_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vstack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vstack_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_vstack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_where_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_where_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_where_cuda_int16, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_xlogy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_zeros_cuda_int64, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_zeros_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_zeros_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_dispatch_symbolic_meta_outplace_zeros_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_empty_quantized_cuda, test/test_meta.py::TestMetaCUDA::test_layer_norm_backward_output_mask0_cuda, test/test_meta.py::TestMetaCUDA::test_layer_norm_backward_output_mask4_cuda, test/test_meta.py::TestMetaCUDA::test_layer_norm_backward_output_mask5_cuda, test/test_meta.py::TestMetaCUDA::test_meta__fused_moving_avg_obs_fq_helper_cuda, test/test_meta.py::TestMetaCUDA::test_meta_inplace_H_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_H_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_T_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace___radd___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___radd___cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace___radd___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rdiv___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rdiv___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rdiv___cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rmatmul___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rmod___cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rmod___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rmul___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace___ror___cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rpow___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace___rpow___cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__batch_norm_with_update_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__chunk_cat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__chunk_cat_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__chunk_cat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__chunk_cat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_acos_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_acos_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_acos_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_add_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_add_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcdiv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcdiv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcdiv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcdiv_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcmul_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_addcmul_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_asin_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_asin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_atan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_atan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_ceil_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_ceil_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_ceil_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_ceil_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_max_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_max_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_max_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_max_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_min_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_clamp_min_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_cos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_erf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_exp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_expm1_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_expm1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_expm1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_floor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_floor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_frac_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lerp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lerp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lgamma_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_lgamma_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log10_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log10_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log1p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log1p_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log1p_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_log_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_max_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_maximum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_minimum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_minimum_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_minimum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_mul_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_mul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_mul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_norm_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_pow_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_pow_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_pow_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_reciprocal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_reciprocal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_round_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_round_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_round_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sigmoid_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sigmoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sigmoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sigmoid_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sign_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sinh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sinh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sinh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sqrt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sqrt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_sub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_tan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_tanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_tanh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_tanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_tanh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_trunc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_trunc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__foreach_zero_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__native_batch_norm_legit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace__segment_reduce_lengths_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__segment_reduce_offsets_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__softmax_backward_data_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__unsafe_masked_index_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace__unsafe_masked_index_put_accumulate_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace__unsafe_masked_index_put_accumulate_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace__unsafe_masked_index_put_accumulate_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace__upsample_bilinear2d_aa_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_abs_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_acos_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_acos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_acosh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_acosh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_acosh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_add_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_add_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addbmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcdiv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcdiv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcmul_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addcmul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addmm_decomposed_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_addr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_alias_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_alias_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_all_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_amax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_aminmax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_angle_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_angle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_any_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_any_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_any_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argmax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argmin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argsort_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argsort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_argwhere_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_partial_views_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_partial_views_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_partial_views_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_partial_views_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_as_strided_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asinh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_asinh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atan2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atanh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atanh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atanh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_1d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_1d_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_1d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_2d_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_2d_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_atleast_3d_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_baddbmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bernoulli_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bfloat16_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bfloat16_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bfloat16_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bfloat16_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_and_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_or_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_right_shift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_right_shift_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_xor_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_bitwise_xor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bmm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bool_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bool_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bool_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_broadcast_tensors_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_broadcast_tensors_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_broadcast_to_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_broadcast_to_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_broadcast_to_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_broadcast_to_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bucketize_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_bucketize_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_byte_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cartesian_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cat_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cat_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cdouble_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cdouble_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cdouble_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ceil_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ceil_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cfloat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cfloat_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cfloat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cfloat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chalf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_char_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_char_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cholesky_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_chunk_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_max_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_min_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clamp_min_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clone_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clone_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clone_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_clone_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_column_stack_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_column_stack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_combinations_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_combinations_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_combinations_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_conj_physical_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_conj_physical_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_constant_pad_nd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_copysign_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_copysign_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_corrcoef_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_corrcoef_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_corrcoef_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_count_nonzero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_count_nonzero_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cov_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cov_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cov_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cross_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cross_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cummax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cummax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cummin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cummin_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cumsum_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cumsum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_cumsum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_deg2rad_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_embed_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_embed_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_embed_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diag_embed_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagflat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagflat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diagonal_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_diff_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dist_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dist_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dist_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_floor_rounding_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_floor_rounding_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_no_rounding_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_trunc_rounding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_div_trunc_rounding_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_double_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_dstack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_einsum_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_einsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_permuted_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_permuted_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_empty_strided_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eq_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_equal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_equal_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erfc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erfinv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_erfinv_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_as_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_as_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_as_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_copy_cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expand_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expm1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expm1_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expm1_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_expm1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exponential_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_exponential_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eye_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eye_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eye_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_eye_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftshift_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_fftshift_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfftn_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfftn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfftn_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_hfftn_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifft2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifftshift_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ifftshift_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfftn_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfftn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_ihfftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfftn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_irfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_rfft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fft_rfft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fill_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fill_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flatten_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flatten_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flatten_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flatten_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flip_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fliplr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fliplr_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fliplr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fliplr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_flipud_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_power_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_power_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_float_power_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_floor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_floor_divide_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_fmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_full_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_full_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_full_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ge_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ge_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ge_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_geqrf_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_geqrf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_geqrf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_grid_sampler_2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_grid_sampler_2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_gt_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_half_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_half_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_heaviside_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_histc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_histc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_histc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_histc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_histc_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_histc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hsplit_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hstack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hstack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hstack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hypot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hypot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_hypot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_i0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_igamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_igammac_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_igammac_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_fill_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_fill_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_fill_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_amax_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_amin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_mean_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_reduce_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_index_select_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_inner_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_inner_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_int_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_int_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_int_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_int_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_int_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isfinite_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isfinite_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isinf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isinf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isinf_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isnan_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isnan_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isnan_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isnan_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isneginf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isposinf_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isposinf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isreal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isreal_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isreal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_isreal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_istft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_item_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_item_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_item_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_2inputs_2outputs_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_2inputs_2outputs_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_2inputs_2outputs_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_4inputs_with_extra_args_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_4inputs_with_extra_args_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_4inputs_with_extra_args_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_4inputs_with_extra_args_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_binary_return_by_ref_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_unary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_unary_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_jiterator_unary_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_kron_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_kthvalue_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lcm_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lcm_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ldexp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ldexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ldexp_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_le_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_le_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lerp_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lgamma_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lgamma_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cholesky_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cholesky_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cond_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cond_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cross_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_cross_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_det_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_diagonal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_diagonal_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_diagonal_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_diagonal_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_eig_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_eigvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_inv_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_ldl_factor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_ldl_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_factor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_factor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_factor_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_lu_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_matrix_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_matrix_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_matrix_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_matrix_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_matrix_rank_hermitian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_matrix_rank_hermitian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_multi_dot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_multi_dot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_norm_subgradients_at_zero_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_norm_subgradients_at_zero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_norm_subgradients_at_zero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_pinv_hermitian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_pinv_singular_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_pinv_singular_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_slogdet_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_svd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_svdvals_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_vander_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_vander_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_vecdot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linalg_vector_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_linspace_tensor_overload_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log10_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log10_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log10_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log1p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_normal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_softmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_softmax_with_dtype_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_softmax_with_dtype_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_softmax_with_dtype_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_log_softmax_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logaddexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_and_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_and_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_and_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_and_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_not_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_or_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_or_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_or_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_or_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_xor_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_xor_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logical_xor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logspace_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logspace_tensor_overload_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logsumexp_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_logsumexp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_long_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_long_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_long_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_long_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_lt_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_lt_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mH_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mH_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mT_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mT_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mT_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_amax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_amax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_amin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_argmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_argmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_argmin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumprod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumprod_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumprod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumprod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_cumsum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_fill_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_fill_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_logaddexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_logsumexp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_logsumexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_mean_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_normalize_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_normalize_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_prod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_prod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_select_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_softmax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_softmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_std_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_std_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_sum_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_var_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_var_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_masked_var_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_matmul_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_matmul_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_matrix_exp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_binary_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_no_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_no_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_no_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_with_dim_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_with_dim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_with_dim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_max_reduction_with_dim_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_maximum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_maximum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_median_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_list_of_tensors_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_list_of_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_list_of_tensors_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_list_of_tensors_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_variadic_tensors_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_meshgrid_variadic_tensors_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_binary_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_reduction_no_dim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_min_reduction_with_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_minimum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_minimum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mode_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mode_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_movedim_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_movedim_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_msort_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_msort_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mv_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_1_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_3_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_mvlgamma_mvlgamma_p_5_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nan_to_num_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nanmean_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nanmean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nanmedian_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nanmedian_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nanquantile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_narrow_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_native_batch_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_native_layer_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_native_layer_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_native_layer_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_neg_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_neg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_strided_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_strided_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_empty_strided_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_full_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_full_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_full_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_ones_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_ones_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_ones_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_zeros_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_new_zeros_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_adaptive_avg_pool1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_adaptive_avg_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_adaptive_avg_pool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_adaptive_avg_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_adaptive_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_avg_pool1d_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_avg_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_batch_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_batch_norm_without_cudnn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_bilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_binary_cross_entropy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_binary_cross_entropy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_binary_cross_entropy_with_logits_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_binary_cross_entropy_with_logits_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_celu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_celu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_channel_shuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_channel_shuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_channel_shuffle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv1d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_conv_transpose1d_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_embedding_loss_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_embedding_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_embedding_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_similarity_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_similarity_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_cosine_similarity_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_ctc_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_dropout_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_elu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_embedding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_feature_alpha_dropout_with_train_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_feature_alpha_dropout_without_train_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_fractional_max_pool2d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_fractional_max_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_grid_sample_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_grid_sample_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardshrink_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardsigmoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardsigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hardtanh_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_hinge_embedding_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_huber_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_huber_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_huber_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_instance_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_bicubic_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_linear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_linear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_nearest-exact_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_trilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_trilinear_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_interpolate_trilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_kl_div_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_l1_loss_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_layer_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_leaky_relu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_local_response_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_local_response_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_logsigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_margin_ranking_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_margin_ranking_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_pool2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool1d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_max_unpool2d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_mish_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_mish_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_mse_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_multilabel_soft_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_nll_loss_cuda_bfloat16, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_normalize_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_circular_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_circular_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_constant_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_reflect_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_negative_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_negative_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pad_replicate_negative_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pairwise_distance_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_shuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_shuffle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_shuffle_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_shuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_unshuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_unshuffle_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_pixel_unshuffle_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_poisson_nll_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_poisson_nll_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu6_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu6_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu6_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_relu_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_rms_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_silu_complex_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_silu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_smooth_l1_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_soft_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_soft_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softmin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softmin_with_dtype_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softmin_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softplus_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softshrink_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softsign_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_softsign_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_tanhshrink_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_triplet_margin_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_triplet_margin_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_triplet_margin_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_triplet_margin_with_distance_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_unfold_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nn_functional_unfold_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_nonzero_static_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_fro_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_fro_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_inf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_norm_inf_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_normal_in_place_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_normal_in_place_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_normal_in_place_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ones_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_outer_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_outer_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_outer_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_outer_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_pca_lowrank_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_permute_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_permute_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_permute_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_permute_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polar_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_1_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_3_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_3_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_4_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_polygamma_polygamma_n_4_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_positive_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_positive_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_pow_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_pow_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_pow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_pow_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_prod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_put_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_put_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rad2deg_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rad2deg_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rad2deg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rad2deg_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rand_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rand_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randint_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_randn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ravel_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ravel_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ravel_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_ravel_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_real_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reciprocal_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reciprocal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_remainder_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_remainder_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_remainder_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_renorm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_interleave_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_repeat_interleave_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_reshape_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize__cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize__cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize_as__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize_as__cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resize_as__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_conj_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_resolve_neg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rot90_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rot90_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rot90_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_round_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_round_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_round_decimals_0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_round_decimals_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_round_decimals_3_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsqrt_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsqrt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_rsqrt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scalar_tensor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scalar_tensor_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scalar_tensor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_sum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_scatter_reduce_sum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_searchsorted_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_searchsorted_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_searchsorted_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_select_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_select_scatter_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_select_scatter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sgn_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_short_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_sigmoid_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sigmoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sigmoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sigmoid_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sigmoid_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signal_windows_cosine_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signal_windows_general_cosine_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_signbit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sinc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sinc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_scatter_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_slice_scatter_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_with_dtype_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_with_dtype_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_with_dtype_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_with_dtype_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_softmax_with_dtype_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sparse_mm_reduce_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sparse_sampled_addmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sparse_sampled_addmm_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sparse_sampled_addmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_j0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_j1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_j1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_bessel_y0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_t_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_t_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_u_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_u_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_v_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_chebyshev_polynomial_w_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_entr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_entr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_erfcx_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_hermite_polynomial_h_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_hermite_polynomial_h_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_hermite_polynomial_h_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i0e_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i0e_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i0e_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_i1e_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_legendre_polynomial_p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_log_ndtr_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_log_ndtr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_log_ndtr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_log_ndtr_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_i0_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_i0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_i0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_i0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_i1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_modified_bessel_k0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_ndtr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_ndtr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_ndtr_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_ndtr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_ndtri_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_polygamma_special_polygamma_n_0_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_scaled_modified_bessel_k1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_u_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_v_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_v_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_shifted_chebyshev_polynomial_w_cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_spherical_bessel_j0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_xlog1py_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_zeta_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_special_zeta_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_list_args_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_list_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_list_args_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_split_with_sizes_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sqrt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sqrt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_square_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_square_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_square_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_squeeze_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_mean_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_std_mean_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sub_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_sum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_svd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_svd_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_t_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_along_dim_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_along_dim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_take_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tensor_split_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tensor_split_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tensordot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tile_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tile_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tile_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tile_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_sparse_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_sparse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_sparse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_sparse_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_to_sparse_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_topk_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_topk_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_torch_ops_aten__efficient_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_torch_ops_aten__flash_attention_forward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trace_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trace_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trace_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_transpose_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapezoid_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapezoid_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapz_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapz_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapz_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_trapz_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_triangular_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tril_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tril_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_tril_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_triu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_triu_indices_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_true_divide_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_true_divide_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_true_divide_cuda_complex32, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_trunc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unbind_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unbind_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unbind_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unbind_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unflatten_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unfold_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_uniform_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_uniform_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unique_consecutive_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unique_consecutive_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unique_consecutive_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsafe_chunk_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsafe_chunk_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsafe_split_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_copy_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_unsqueeze_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_var_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_var_mean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_var_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_var_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_var_mean_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vdot_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vdot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_as_real_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_copy_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_view_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vstack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vstack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vstack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_vstack_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_where_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_where_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_where_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_xlogy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_xlogy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_xlogy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zero__cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zero__cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zero__cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_like_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_like_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_inplace_zeros_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_H_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_H_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_H_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_H_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___getitem___cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace___radd___cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace___radd___cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___radd___cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___radd___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rand___cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rand___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rdiv___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rdiv___cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rdiv___cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmatmul___cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmul___cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmul___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmul___cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rmul___cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace___ror___cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___ror___cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace___ror___cuda_int8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace___rpow___cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rsub___cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace___rsub___cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__batch_norm_with_update_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__batch_norm_with_update_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__chunk_cat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__chunk_cat_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_abs_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_acos_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_acos_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_acos_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_add_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcdiv_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcdiv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcdiv_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcmul_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcmul_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_addcmul_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_asin_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_asin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_asin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_atan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_atan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_ceil_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_ceil_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_ceil_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_ceil_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_max_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_min_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_min_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_min_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_min_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_min_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_clamp_min_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cosh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_cosh_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_div_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_div_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_div_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_div_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erf_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erfc_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erfc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_erfc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_exp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_exp_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_exp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_expm1_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_expm1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_floor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_floor_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_frac_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_frac_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lerp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lerp_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lerp_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lgamma_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_lgamma_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log10_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log1p_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_log_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_max_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_maximum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_maximum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_maximum_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_maximum_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_minimum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_neg_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_neg_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_neg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_norm_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_pow_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_pow_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_pow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_round_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_round_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_round_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sign_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sign_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sin_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sinh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sinh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sqrt_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sqrt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sqrt_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sub_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sub_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sub_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_sub_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tanh_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_tanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_trunc_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_trunc_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__foreach_zero_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__native_batch_norm_legit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace__softmax_backward_data_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__unsafe_masked_index_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace__unsafe_masked_index_put_accumulate_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace__unsafe_masked_index_put_accumulate_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__upsample_bilinear2d_aa_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace__upsample_bilinear2d_aa_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_abs_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_abs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_abs_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_acosh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_acosh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_acosh_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_acosh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_add_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addbmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcdiv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcmul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcmul_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addcmul_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addmv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addmv_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_addmv_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addr_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_addr_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_alias_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_alias_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_all_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_allclose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_allclose_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_amax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_amin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_aminmax_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_aminmax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_angle_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_angle_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_any_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_any_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_any_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_arange_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_arange_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argmin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argsort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argwhere_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argwhere_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_argwhere_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_copy_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_copy_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_partial_views_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_partial_views_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_partial_views_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_partial_views_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_as_strided_partial_views_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_asin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_asinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atanh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_1d_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_1d_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_1d_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_2d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_3d_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_3d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_3d_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_atleast_3d_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bfloat16_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bincount_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_and_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_and_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_and_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_left_shift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_or_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_or_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bitwise_xor_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_block_diag_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_block_diag_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_block_diag_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_block_diag_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bool_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bool_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bool_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_tensors_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_to_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_broadcast_to_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_bucketize_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_byte_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_byte_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cartesian_prod_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cartesian_prod_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cartesian_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cartesian_prod_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cartesian_prod_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdist_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdouble_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdouble_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cdouble_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ceil_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cfloat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_char_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_char_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cholesky_inverse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cholesky_solve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_chunk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_chunk_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_max_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_max_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_max_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_min_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_min_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clamp_min_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clone_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_clone_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_column_stack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_combinations_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_combinations_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_combinations_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_physical_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_conj_physical_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_constant_pad_nd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_constant_pad_nd_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_constant_pad_nd_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_constant_pad_nd_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_constant_pad_nd_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_contiguous_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_contiguous_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_corrcoef_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_corrcoef_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_corrcoef_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cos_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cos_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cos_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cos_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cos_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cos_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cos_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_cosh_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cosh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cosh_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_count_nonzero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cov_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cov_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cov_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cross_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cross_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummax_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cummin_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cumprod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cumsum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cumsum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cumulative_trapezoid_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_cumulative_trapezoid_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_deg2rad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_deg2rad_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_embed_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_embed_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_embed_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diag_embed_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagflat_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagflat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagflat_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagflat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_copy_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_copy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_scatter_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diagonal_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diff_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diff_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_diff_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_digamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_digamma_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_digamma_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_floor_rounding_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_no_rounding_mode_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_no_rounding_mode_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_no_rounding_mode_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_no_rounding_mode_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_trunc_rounding_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_div_trunc_rounding_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_double_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_double_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_double_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dstack_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dstack_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dstack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_dstack_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_einsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_like_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_like_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_strided_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_strided_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_empty_strided_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eq_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erf_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfc_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfc_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfinv_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfinv_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfinv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfinv_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_erfinv_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exp2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exp2_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_exp2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_as_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expand_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expm1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expm1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_expm1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_exponential_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eye_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eye_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eye_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eye_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_eye_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft2_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft2_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft2_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftshift_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftshift_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftshift_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftshift_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftshift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_fftshift_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfft2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfft_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfftn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_hfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft2_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifft_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftshift_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ifftshift_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_ihfft_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfft2_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfft_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfftn_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfftn_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_irfftn_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_rfft2_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_rfft2_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_rfft_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_rfftn_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fft_rfftn_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fill_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flatten_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flatten_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flatten_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flatten_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flip_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flip_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flip_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flip_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fliplr_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fliplr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fliplr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flipud_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flipud_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flipud_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_flipud_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_float_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_float_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_float_power_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmax_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmax_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmod_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmod_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_fmod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_frexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_full_like_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gather_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gather_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gather_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gather_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gcd_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gcd_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_ge_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ge_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ge_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gradient_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gradient_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gradient_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_grid_sampler_2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_grid_sampler_2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_gt_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_half_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_half_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_half_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_half_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_heaviside_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_heaviside_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_histc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_histc_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hsplit_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hsplit_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hsplit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hsplit_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hsplit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hstack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hypot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_hypot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_i0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_igammac_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_imag_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_add_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_fill_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_fill_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_fill_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_put_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_put_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_amax_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_amin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_mean_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_mean_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_prod_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_prod_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_prod_cuda_int64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_reduce_prod_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_select_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_index_select_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_int_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_int_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_int_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_int_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_int_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isclose_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isclose_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isclose_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isclose_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isfinite_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isfinite_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isin_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isinf_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isinf_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isinf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isnan_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isnan_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isneginf_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_isreal_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_item_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_item_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_item_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_2inputs_2outputs_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_2inputs_2outputs_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_2inputs_2outputs_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_2inputs_2outputs_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_4inputs_with_extra_args_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_4inputs_with_extra_args_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_return_by_ref_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_return_by_ref_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_jiterator_binary_return_by_ref_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_kthvalue_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_kthvalue_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lcm_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ldexp_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ldexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ldexp_cuda_int32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_le_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_le_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_le_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_le_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_le_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lerp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lgamma_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lgamma_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lgamma_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lgamma_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lgamma_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cholesky_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cholesky_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cond_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cross_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cross_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cross_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cross_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_cross_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_det_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_det_singular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_diagonal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_eig_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_eigh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_eigvals_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_eigvalsh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_eigvalsh_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_householder_product_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_inv_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_inv_ex_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_ldl_factor_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_ldl_factor_ex_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_ldl_factor_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_ldl_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lstsq_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lstsq_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lu_factor_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_lu_factor_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_matrix_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_norm_subgradients_at_zero_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_norm_subgradients_at_zero_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_qr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_qr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_slogdet_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_slogdet_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_ex_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_solve_triangular_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_svd_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_tensorinv_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vander_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vander_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vecdot_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vecdot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vecdot_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vecdot_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vecdot_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vector_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linalg_vector_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linspace_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linspace_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linspace_tensor_overload_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linspace_tensor_overload_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linspace_tensor_overload_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linspace_tensor_overload_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_linspace_tensor_overload_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log10_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log1p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log1p_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log1p_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log2_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log2_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log2_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_normal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_normal_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_with_dtype_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_with_dtype_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_log_softmax_with_dtype_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logaddexp2_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logaddexp_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logaddexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logdet_cuda_complex128, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_logdet_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logdet_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_and_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_and_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_and_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_not_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_or_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_or_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_xor_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_xor_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logical_xor_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logspace_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logspace_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_logsumexp_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_long_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_long_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_long_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lu_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lu_solve_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lu_solve_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_lu_unpack_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mH_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mT_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mT_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mT_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mT_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mT_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mT_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mT_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_amax_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_amin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_argmin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_argmin_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_cumprod_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_cumsum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_fill_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_fill_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_fill_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_log_softmax_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_log_softmax_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_logaddexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_logsumexp_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_logsumexp_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_logsumexp_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_logsumexp_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_mean_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_mean_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_mean_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_median_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_prod_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_prod_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_scatter_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_scatter_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_select_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_select_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_std_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_std_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_std_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_sum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_sum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_sum_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_var_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_masked_var_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_matmul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_binary_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_binary_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_max_pool2d_with_indices_backward_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_maximum_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_median_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_median_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_median_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_median_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_list_of_tensors_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_list_of_tensors_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_list_of_tensors_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_list_of_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_list_of_tensors_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_list_of_tensors_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_variadic_tensors_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_variadic_tensors_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_variadic_tensors_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_meshgrid_variadic_tensors_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_binary_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_binary_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_binary_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_binary_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_no_dim_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_no_dim_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_no_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_with_dim_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_min_reduction_with_dim_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_minimum_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_minimum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_minimum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mode_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mode_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mode_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mode_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_movedim_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_movedim_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_msort_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_msort_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mul_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mul_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mul_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_multinomial_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_multinomial_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_multinomial_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_3_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_mvlgamma_mvlgamma_p_5_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nan_to_num_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nan_to_num_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nan_to_num_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nan_to_num_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nanmean_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nanquantile_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nansum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nansum_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_copy_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_narrow_cuda_bool, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_native_batch_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_native_dropout_backward_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_native_dropout_backward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_native_layer_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ne_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ne_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ne_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ne_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_neg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_empty_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_empty_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_empty_strided_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_empty_strided_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_full_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_full_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_full_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_full_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_full_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_ones_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_ones_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_new_zeros_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nextafter_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_adaptive_avg_pool3d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_adaptive_max_pool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_adaptive_max_pool3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_avg_pool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_avg_pool1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_batch_norm_without_cudnn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_bilinear_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_bilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_binary_cross_entropy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_binary_cross_entropy_with_logits_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_celu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_celu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_channel_shuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_channel_shuffle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_channel_shuffle_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_channel_shuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_channel_shuffle_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv3d_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose1d_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose1d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose2d_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose2d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose2d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose3d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_conv_transpose3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_cosine_embedding_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_cosine_embedding_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_cosine_embedding_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_cross_entropy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_dropout_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_dropout_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_embedding_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_embedding_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_with_train_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_feature_alpha_dropout_without_train_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_fractional_max_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_gaussian_nll_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_gelu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_glu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_glu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_glu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_glu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_grid_sample_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hardshrink_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hardsigmoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hardswish_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hardtanh_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hardtanh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_hinge_embedding_loss_cuda_float16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_huber_loss_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_huber_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_area_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_bilinear_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_nearest_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_interpolate_trilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_kl_div_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_kl_div_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_l1_loss_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_layer_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_leaky_relu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_leaky_relu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_linear_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_linear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_local_response_norm_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_local_response_norm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_local_response_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_logsigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_margin_ranking_loss_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_pool2d_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_pool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_pool3d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_unpool1d_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_unpool1d_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_unpool1d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_unpool2d_grad_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_unpool3d_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_unpool3d_grad_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_max_unpool3d_grad_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_multi_head_attention_forward_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_multi_margin_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_multilabel_soft_margin_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_normalize_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_normalize_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_circular_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_constant_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_constant_cuda_complex64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_constant_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_constant_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_reflect_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_negative_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_negative_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pad_replicate_negative_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pairwise_distance_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pairwise_distance_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pdist_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_shuffle_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_unshuffle_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_unshuffle_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_unshuffle_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_unshuffle_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_unshuffle_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_pixel_unshuffle_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_poisson_nll_loss_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_poisson_nll_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_poisson_nll_loss_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_poisson_nll_loss_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_prelu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_relu6_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_relu6_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_relu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_relu_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_rms_norm_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_rrelu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_rrelu_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_scaled_dot_product_attention_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_selu_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_selu_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_smooth_l1_loss_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softmin_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softmin_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softplus_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_softplus_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_tanhshrink_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_tanhshrink_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_tanhshrink_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_threshold_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_threshold_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_triplet_margin_loss_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_triplet_margin_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_triplet_margin_with_distance_loss_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nn_functional_upsample_bilinear_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_static_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_nonzero_static_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_fro_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_fro_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_inf_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_norm_inf_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_normal_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_normal_number_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ones_like_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ormqr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_outer_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_outer_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_outer_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pca_lowrank_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pca_lowrank_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_permute_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_permute_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pinverse_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pinverse_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pinverse_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_0_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_3_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_3_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_polygamma_polygamma_n_4_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_positive_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_positive_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pow_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pow_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_pow_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_prod_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_put_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_put_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_put_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_qr_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_qr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_quantile_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rad2deg_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rad2deg_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rand_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randint_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randint_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randint_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randint_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randint_like_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randn_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_randn_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ravel_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ravel_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_ravel_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_real_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_real_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_real_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_real_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_real_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reciprocal_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reciprocal_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reciprocal_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_remainder_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_repeat_interleave_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_as_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_as_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_reshape_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resize__cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resize__cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resize_as__cuda_int16, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_resize_as__cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resize_as__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_conj_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_conj_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_neg_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_neg_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_resolve_neg_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_roll_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_roll_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_roll_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rot90_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rot90_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rot90_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rot90_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_round_decimals_0_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_round_decimals_3_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsqrt_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsqrt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsqrt_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsub_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsub_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_rsub_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scalar_tensor_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scalar_tensor_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_add_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_amax_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_amin_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_mean_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_mean_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_mean_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_sum_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_sum_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_sum_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_scatter_reduce_sum_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_select_scatter_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sgn_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sgn_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_short_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_short_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_short_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sigmoid_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sigmoid_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_sign_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_exponential_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_exponential_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_gaussian_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_general_hamming_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_hamming_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signal_windows_nuttall_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signbit_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signbit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_signbit_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sin_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sin_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sin_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sinc_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sinc_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sinh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sinh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sinh_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_slice_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_slice_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_slice_scatter_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_slice_scatter_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_softmax_with_dtype_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_softmax_with_dtype_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sort_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sort_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sort_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sort_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sort_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sparse_mm_reduce_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sparse_sampled_addmm_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sparse_sampled_addmm_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_airy_ai_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_airy_ai_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_airy_ai_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j0_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j1_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_j1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_y0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_y0_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_bessel_y1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_t_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_t_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_u_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_u_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_u_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_v_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_v_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_w_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_chebyshev_polynomial_w_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_entr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_entr_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_erfcx_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_erfcx_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_hermite_polynomial_he_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_hermite_polynomial_he_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_hermite_polynomial_he_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_hermite_polynomial_he_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i0e_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i0e_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i0e_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1e_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_i1e_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_laguerre_polynomial_l_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_legendre_polynomial_p_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_legendre_polynomial_p_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_log_ndtr_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_i1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_i1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_i1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k1_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k1_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k1_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k1_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_modified_bessel_k1_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtr_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtr_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_ndtri_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_polygamma_special_polygamma_n_0_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_scaled_modified_bessel_k0_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_scaled_modified_bessel_k1_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_shifted_chebyshev_polynomial_t_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_shifted_chebyshev_polynomial_u_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_shifted_chebyshev_polynomial_v_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_spherical_bessel_j0_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_xlog1py_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_xlog1py_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_special_zeta_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_list_args_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_list_args_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_list_args_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_list_args_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_list_args_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_split_with_sizes_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sqrt_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sqrt_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_square_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_square_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_square_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_multiple_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_multiple_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_squeeze_multiple_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_stack_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_stack_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_stack_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_stack_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_mean_unbiased_cuda_float32, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_std_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_stft_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_stft_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sub_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sub_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sum_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sum_to_size_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sum_to_size_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_sum_to_size_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_svd_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_svd_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_svd_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_t_copy_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_t_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_along_dim_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_along_dim_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_take_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tan_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tan_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tan_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tan_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tan_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tanh_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tanh_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tanh_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tensor_split_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tensor_split_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tile_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tile_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_sparse_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_sparse_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_sparse_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_to_sparse_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_topk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_torch__scaled_mm_cuda_float8_e4m3fn, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trace_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trace_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trace_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_transpose_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_transpose_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapezoid_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapz_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_trapz_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_triangular_solve_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tril_cuda_float64, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_tril_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_tril_indices_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_triu_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_triu_indices_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_triu_indices_cuda_int64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_true_divide_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unbind_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unflatten_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unflatten_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unflatten_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unflatten_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unfold_copy_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unfold_copy_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unfold_copy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unfold_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unfold_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_uniform_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unique_consecutive_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unique_consecutive_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_chunk_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_chunk_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_chunk_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_chunk_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_split_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_split_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsafe_split_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsqueeze_copy_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsqueeze_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsqueeze_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_unsqueeze_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_var_mean_unbiased_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_var_mean_unbiased_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_var_unbiased_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vdot_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_as_complex_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_copy_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_cuda_bool, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_view_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vsplit_cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vsplit_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vsplit_cuda_float32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vsplit_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vsplit_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vsplit_cuda_uint8, 
test/test_meta.py::TestMetaCUDA::test_meta_outplace_vstack_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vstack_cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_vstack_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_where_cuda_complex32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_where_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_where_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_xlogy_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_xlogy_cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zero__cuda_bfloat16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zero__cuda_float64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zero__cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zero__cuda_uint8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_cuda_int16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_cuda_int8, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_complex128, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_complex64, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_float16, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_int32, test/test_meta.py::TestMetaCUDA::test_meta_outplace_zeros_like_cuda_int64, test/test_meta.py::TestMetaCUDA::test_nan_to_num_cuda 2024-08-07T19:08:59.8503834Z 2024-08-07T19:09:03.0415604Z Running dynamo/test_skip_non_tensor 1/1 ... [2024-08-07 19:09:03.041062] 2024-08-07T19:09:03.0419544Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'dynamo/test_skip_non_tensor.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 19:09:03.041521] 2024-08-07T19:09:07.6151505Z 2024-08-07T19:09:07.6155058Z dynamo/test_skip_non_tensor 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_skip_non_tensor_1.1_2e14e453c00ee288_.log 2024-08-07T19:09:07.6159061Z Running 8 items in this shard: test/dynamo/test_skip_non_tensor.py::SkipNonTensorTests::test_add_skip, test/dynamo/test_skip_non_tensor.py::SkipNonTensorTests::test_add_tensor1, test/dynamo/test_skip_non_tensor.py::SkipNonTensorTests::test_add_tensor2, test/dynamo/test_skip_non_tensor.py::SkipNonTensorTests::test_add_tensor_dict, test/dynamo/test_skip_non_tensor.py::SkipNonTensorTests::test_add_tensor_list, test/dynamo/test_skip_non_tensor.py::SkipNonTensorTests::test_custom_list, test/dynamo/test_skip_non_tensor.py::SkipNonTensorTests::test_do_not_skip_side_effects, test/dynamo/test_skip_non_tensor.py::SkipNonTensorTests::test_recursive_list 2024-08-07T19:09:07.6162225Z 2024-08-07T19:09:11.4350273Z Running dynamo/test_interop 1/1 ... [2024-08-07 19:09:11.434514] 2024-08-07T19:09:11.4354472Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'dynamo/test_interop.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 19:09:11.435023] 2024-08-07T19:09:15.9585141Z 2024-08-07T19:09:15.9586357Z dynamo/test_interop 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_interop_1.1_3a0275a630e9b103_.log 2024-08-07T19:09:15.9590175Z Running 4 items in this shard: test/dynamo/test_interop.py::InteropTests::test_fx_fn, test/dynamo/test_interop.py::InteropTests::test_script_fn, test/dynamo/test_interop.py::InteropTests::test_trace_fn, test/dynamo/test_interop.py::InteropTests::test_vmap_in_graph 2024-08-07T19:09:15.9591989Z 2024-08-07T19:09:19.7556826Z Running inductor/test_extension_backend 1/1 ... [2024-08-07 19:09:19.755138] 2024-08-07T19:09:19.7561168Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_extension_backend.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 19:09:19.755681] 2024-08-07T19:09:52.3361741Z 2024-08-07T19:09:52.3365336Z inductor/test_extension_backend 1/1 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_extension_backend_1.1_3016a201a34a0504_.log 2024-08-07T19:09:52.3366786Z Running 1 items in this shard: test/inductor/test_extension_backend.py::ExtensionBackendTests::test_open_device_registration 2024-08-07T19:09:52.3367495Z 2024-08-07T19:09:56.2049669Z Running inductor/test_compiled_optimizers 1/1 ... [2024-08-07 19:09:56.204445] 2024-08-07T19:09:56.2054787Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_compiled_optimizers.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 19:09:56.205028] 2024-08-07T19:15:09.7017595Z 2024-08-07T19:15:09.7019094Z test_ops_jit 3/3 was successful, full logs can be found in artifacts with path test/test-reports/test_ops_jit_3.3_b9d3a7b04fcbc5d1_.log 2024-08-07T19:15:09.7333642Z Running 373 items in this shard: test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_atan2_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_atan_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_erfinv_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_exp2_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_ge_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_i0_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_igamma_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_lgamma_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_linalg_det_singular_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_linalg_inv_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_log1p_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_log_softmax_with_dtype_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_lt_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_mH_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_min_binary_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_movedim_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_ne_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_nn_functional_conv2d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_nn_functional_conv_transpose1d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_nn_functional_conv_transpose2d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_nn_functional_rms_norm_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_round_decimals_0_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_round_decimals_neg_3_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_sub_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_jit_alias_remapping_trunc_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_T_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit___radd___cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit___rdiv___cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit___rdiv___cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit___rmod___cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit___rmul___cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit___rmul___cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit___rsub___cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit__segment_reduce_offsets_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit__unsafe_masked_index_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit__unsafe_masked_index_put_accumulate_cuda_float32, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_acos_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_acosh_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_acosh_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_addbmm_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_addcdiv_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_addcmul_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_addmm_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_addmm_decomposed_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_angle_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_any_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_argsort_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_argwhere_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_as_strided_partial_views_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_asin_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_atan2_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_atanh_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_atanh_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_baddbmm_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_baddbmm_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_bfloat16_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_block_diag_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_bool_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_broadcast_shapes_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_broadcast_to_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cartesian_prod_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cartesian_prod_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cat_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cat_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cdist_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cfloat_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cfloat_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_chalf_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_clamp_min_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_clone_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_column_stack_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_combinations_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_conj_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_conj_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_constant_pad_nd_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_contiguous_cuda_float32, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cosh_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_count_nonzero_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cumprod_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_cumulative_trapezoid_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_deg2rad_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_diagonal_copy_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_diagonal_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_diagonal_scatter_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_div_no_rounding_mode_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_dsplit_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_dstack_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_dstack_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_einsum_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_einsum_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_eq_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_erfc_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_exp2_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_exp_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_expand_copy_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_expm1_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_eye_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_fft2_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_fftn_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_hfft_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_ifft2_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_ifft2_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_ifft_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_ifftn_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_ifftshift_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_ihfftn_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_irfftn_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_rfft2_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fft_rfft_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fill_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_flatten_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fliplr_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_flipud_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_float_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_float_power_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_fmod_cuda_float32, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_frac_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_full_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_full_like_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_gather_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_gradient_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_hsplit_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_hstack_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_igamma_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_igammac_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_imag_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_index_fill_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_index_reduce_amin_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_int_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_isclose_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_isclose_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_isinf_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_isnan_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_isnan_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_isneginf_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_istft_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_item_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_jiterator_4inputs_with_extra_args_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_jiterator_binary_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_jiterator_unary_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_kthvalue_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_ldexp_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_lgamma_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_cond_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_det_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_det_singular_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_diagonal_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_eigvals_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_eigvalsh_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_eigvalsh_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_householder_product_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_inv_ex_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_lstsq_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_lu_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_lu_factor_ex_cuda_complex64, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_lu_factor_ex_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_matrix_power_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_matrix_power_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_matrix_rank_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_multi_dot_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_norm_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_norm_subgradients_at_zero_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_pinv_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_pinv_singular_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_pinv_singular_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_qr_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_slogdet_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_solve_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_solve_ex_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_svd_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_svd_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_svdvals_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_svdvals_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_tensorinv_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_tensorsolve_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_vander_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linalg_vander_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_linspace_tensor_overload_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_log_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_log_normal_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_log_softmax_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logaddexp2_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logcumsumexp_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logdet_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logical_and_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logical_and_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logical_xor_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logical_xor_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logit_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logspace_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_logspace_tensor_overload_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_lu_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_lu_unpack_cuda_complex64, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_lu_unpack_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_mT_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_amax_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_amin_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_cumsum_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_mean_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_norm_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_normalize_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_select_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_select_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_sum_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_var_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_masked_var_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_matmul_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_matrix_exp_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_max_binary_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_mean_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_median_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_meshgrid_list_of_tensors_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_min_reduction_with_dim_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_mode_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_movedim_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_mul_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_multinomial_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_mvlgamma_mvlgamma_p_1_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_mvlgamma_mvlgamma_p_3_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_mvlgamma_mvlgamma_p_5_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nan_to_num_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nanmean_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nanquantile_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nansum_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_narrow_copy_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_narrow_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_narrow_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_native_dropout_backward_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_ne_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_neg_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_new_empty_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_new_empty_strided_cuda_complex64, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_new_empty_strided_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_new_ones_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_new_zeros_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nextafter_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_adaptive_avg_pool2d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_adaptive_avg_pool3d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_avg_pool1d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_avg_pool2d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_batch_norm_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_conv3d_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_conv_transpose2d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_feature_alpha_dropout_with_train_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_feature_alpha_dropout_without_train_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_fractional_max_pool2d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_fractional_max_pool3d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_gelu_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_glu_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_grid_sample_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_hardtanh_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_hinge_embedding_loss_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_instance_norm_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_interpolate_area_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_interpolate_bicubic_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_interpolate_bilinear_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_interpolate_nearest-exact_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_interpolate_trilinear_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_kl_div_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_l1_loss_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_leaky_relu_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_logsigmoid_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_max_pool1d_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_max_unpool1d_grad_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_mse_loss_cuda_float32, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_multi_head_attention_forward_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_pad_circular_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_pad_constant_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_pad_constant_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_pad_replicate_negative_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_pairwise_distance_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_pixel_shuffle_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_pixel_shuffle_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_poisson_nll_loss_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_relu6_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_rrelu_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_silu_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_softplus_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_softshrink_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_softsign_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_triplet_margin_loss_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_unfold_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nn_functional_unfold_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nonzero_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_nonzero_static_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_norm_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_norm_fro_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_norm_inf_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_ormqr_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_outer_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_pca_lowrank_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_pinverse_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_polar_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_polygamma_polygamma_n_0_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_polygamma_polygamma_n_3_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_positive_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_pow_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_prod_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_put_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_qr_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_qr_cuda_float32, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_quantile_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_randn_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_reciprocal_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_repeat_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_reshape_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_reshape_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_resize_as__cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_resolve_neg_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_roll_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_scatter_add_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_scatter_add_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_scatter_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_select_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_sgn_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_sigmoid_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_signal_windows_bartlett_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_signal_windows_blackman_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_signal_windows_general_hamming_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_signal_windows_hann_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_signal_windows_kaiser_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_sinc_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_bessel_j1_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_chebyshev_polynomial_w_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_entr_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_erfcx_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_hermite_polynomial_h_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_i1e_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_log_ndtr_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_modified_bessel_i0_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_modified_bessel_i1_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_ndtr_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_scaled_modified_bessel_k0_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_shifted_chebyshev_polynomial_t_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_special_shifted_chebyshev_polynomial_w_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_split_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_split_with_sizes_copy_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_squeeze_cuda_float32, 
test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_squeeze_multiple_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_stack_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_stack_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_std_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_std_mean_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_sum_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_sum_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_sum_to_size_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_svd_lowrank_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_t_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_take_along_dim_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_take_along_dim_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_take_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_take_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_tan_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_tanh_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_tensor_split_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_tensordot_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_trace_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_trace_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_transpose_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_trapezoid_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_trapz_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_tril_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_trunc_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_unfold_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_unsqueeze_copy_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_unsqueeze_copy_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_var_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_vdot_cuda_complex64, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_vdot_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_vsplit_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_where_cuda_float32, test/test_ops_jit.py::TestJitCUDA::test_variant_consistency_jit_zeros_like_cuda_float32 2024-08-07T19:15:09.7603382Z 2024-08-07T19:15:13.6101453Z Running export/test_tools 1/1 ... [2024-08-07 19:15:13.609615] 2024-08-07T19:15:13.6105922Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'export/test_tools.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... 
[2024-08-07 19:15:13.610129] 2024-08-07T19:15:18.3339747Z 2024-08-07T19:15:18.3341698Z export/test_tools 1/1 was successful, full logs can be found in artifacts with path test/test-reports/export.test_tools_1.1_6574ec9db8e5b0e2_.log 2024-08-07T19:15:18.3344942Z Running 2 items in this shard: test/export/test_tools.py::TestExportTools::test_report_exportability_basic, test/export/test_tools.py::TestExportTools::test_report_exportability_with_issues 2024-08-07T19:15:18.3346936Z 2024-08-07T19:15:22.1418980Z Running dynamo/test_inline_inbuilt_nn_modules 1/1 ... [2024-08-07 19:15:22.141382] 2024-08-07T19:15:22.1426086Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'dynamo/test_inline_inbuilt_nn_modules.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 19:15:22.142149] 2024-08-07T19:15:38.3004877Z 2024-08-07T19:15:38.3006590Z inductor/test_compiled_optimizers 1/1 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_compiled_optimizers_1.1_6c23bac92bd7cf38_.log 2024-08-07T19:15:38.3343790Z Running 556 items in this shard: test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_recompile, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_rho_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_rho_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_rho_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_polynomiallr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_tensor_lr_capturable_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adadelta_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_initial_accumulator_value_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_initial_accumulator_value_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_initial_accumulator_value_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_lr_decay_weight_decay_cpu, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_lr_decay_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_lr_decay_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_recompile, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cpu_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_cosineannealingwarmrestarts, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_tensor_lr_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adagrad_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_recompile, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_multisteplr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_tensor_lr_amsgrad_capturable_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_amsgrad_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_amsgrad_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_amsgrad_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_amsgrad_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_amsgrad_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adam_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_capturable_cuda, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_recompile, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_multisteplr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_tensor_lr_weight_decay_capturable_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_maximize_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_maximize_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamax_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_recompile, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_onecyclelr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_tensor_lr_amsgrad_capturable_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_amsgrad_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_amsgrad_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_amsgrad_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_amsgrad_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_amsgrad_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_adamw_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_capturable_foreach_cuda, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_lambd_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_lambd_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_lambd_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_maximize_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_maximize_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_recompile_default, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_recompile_foreach, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_recompile_single, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_t0_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_t0_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_t0_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_constantlr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_tensor_lr_weight_decay_maximize_capturable_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_maximize_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_maximize_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_asgd_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_basic_shampoo, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_closure_graph_break, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_compile_time_smoketest, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_get_value_on_static_address, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_guard_on_none_grads, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_capturable_foreach_cuda, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_momentum_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_momentum_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_momentum_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_recompile, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_cosineannealingwarmrestarts, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_tensor_lr_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_decoupled_weight_decay_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_decoupled_weight_decay_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_decoupled_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_decoupled_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_decoupled_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_nadam_weight_decay_momentum_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_capturable_cuda, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_capturable_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_capturable_weight_decay_decoupled_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_capturable_weight_decay_decoupled_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_capturable_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_eps_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_eps_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_eps_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_cosineannealinglr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_tensor_lr_capturable_weight_decay_decoupled_weight_decay_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_decoupled_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_decoupled_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_decoupled_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_radam_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_recompile, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_constantlr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_tensor_lr_capturable_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_foreach_cuda, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_momentum_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_momentum_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_momentum_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_momentum_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_momentum_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_centered_momentum_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_maximize_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rmsprop_weight_decay_maximize_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_capturable_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_capturable_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_etas_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_etas_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_etas_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_recompile, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_step_sizes_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_step_sizes_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_step_sizes_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_linearlr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_rprop_tensor_lr_capturable_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_dampening_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_dampening_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_dampening_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_nesterov_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_nesterov_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_nesterov_weight_decay_foreach_cuda, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_weight_decay_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_weight_decay_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_momentum_weight_decay_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_recompile_foreach, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_recompile_single, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cpu_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_cosineannealinglr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_constantlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_cosineannealinglr, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_cosineannealingwarmrestarts, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_cycliclr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_exponentiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_lambdalr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_linearlr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_multiplicativelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_multisteplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_onecyclelr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_polynomiallr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_reducelronplateau, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_tensor_lr_foreach_cuda_steplr, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_weight_decay_maximize_cpu, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_weight_decay_maximize_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_sgd_weight_decay_maximize_foreach_cuda, test/inductor/test_compiled_optimizers.py::CompiledOptimizerTests::test_static_address_finalizer, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_ASGD_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_ASGD_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adadelta_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adadelta_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adafactor_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adafactor_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adagrad_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adagrad_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_AdamW_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_AdamW_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adam_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adam_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adamax_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Adamax_use_closure_True_cuda_float32, 
test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_LBFGS_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_LBFGS_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_NAdam_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_NAdam_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_RAdam_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_RAdam_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_RMSprop_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_RMSprop_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Rprop_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_Rprop_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_SGD_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_SGD_use_closure_True_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_SparseAdam_use_closure_False_cuda_float32, test/inductor/test_compiled_optimizers.py::CompiledOptimizerParityTestsCUDA::test_correctness_SparseAdam_use_closure_True_cuda_float32 2024-08-07T19:15:38.3670003Z 2024-08-07T19:15:42.2268636Z Running inductor/test_move_constructors_to_cuda 1/1 ... [2024-08-07 19:15:42.226247] 2024-08-07T19:15:42.2271365Z Executing ['/opt/conda/envs/py_3.10/bin/python', '-bb', 'inductor/test_move_constructors_to_cuda.py', '-m', 'not serial', '--shard-id=1', '--num-shards=1', '-v', '-vv', '-rfEX', '-p', 'no:xdist', '--use-pytest', '-x', '--reruns=2', '--import-slow-tests', '--import-disabled-tests'] ... [2024-08-07 19:15:42.226707]
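The 'Executing [...]' line above records the exact argument vector the harness used for this test file. As a minimal, hypothetical sketch (not part of the captured log), the same invocation could be replayed with Python's subprocess module; the interpreter path and working directory below are assumptions standing in for the CI-specific values, and the harness-specific flags (--use-pytest, --shard-id, --num-shards, --import-slow-tests, --import-disabled-tests) are simply passed through as logged:

    # Minimal sketch: replay the logged shard invocation locally.
    # Assumptions: executed from the pytorch checkout's test/ directory,
    # with sys.executable standing in for /opt/conda/envs/py_3.10/bin/python.
    import subprocess
    import sys

    argv = [
        sys.executable, "-bb",             # -bb: raise errors on str/bytes comparisons
        "inductor/test_move_constructors_to_cuda.py",
        "-m", "not serial",                # deselect tests marked 'serial'
        "--shard-id=1", "--num-shards=1",  # harness sharding: this file ran as shard 1 of 1
        "-v", "-vv", "-rfEX",              # verbose; summarize (f)ailed, (E)rrors, (X)passed
        "-p", "no:xdist",                  # disable the pytest-xdist plugin
        "--use-pytest", "-x",              # harness flag as logged; stop on first failure
        "--reruns=2",                      # retry failing tests up to twice (pytest-rerunfailures)
        "--import-slow-tests", "--import-disabled-tests",  # harness flags as logged
    ]
    completed = subprocess.run(argv)
    print("exit code:", completed.returncode)

With --num-shards=1 the "1/1" in the surrounding log lines follows directly: the file's entire test set runs in this single invocation. The dynamo/test_inline_inbuilt_nn_modules run below likewise reports 1/1, with all 1102 collected items in one shard.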
2024-08-07T19:15:46.2118649Z 2024-08-07T19:15:46.2120418Z inductor/test_move_constructors_to_cuda 1/1 was successful, full logs can be found in artifacts with path test/test-reports/inductor.test_move_constructors_to_cuda_1.1_131488603c74d17c_.log 2024-08-07T19:15:46.2121380Z 2024-08-07T19:18:31.0446451Z 2024-08-07T19:18:31.0448130Z dynamo/test_inline_inbuilt_nn_modules 1/1 was successful, full logs can be found in artifacts with path test/test-reports/dynamo.test_inline_inbuilt_nn_modules_1.1_ffc2ed4ef395d7da_.log 2024-08-07T19:18:31.1219546Z Running 1102 items in this shard: test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_312_binary_slice_with_graph_break1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_312_binary_slice_with_graph_break2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_T_tensor_attribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_add_sizes_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_add_to_set_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_anomaly_aot_autograd_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_any_all_symnode_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_aot_autograd_propagate_unbacked_symints_shape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_assert_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_assert_size_stride_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_assigning_function_to_class_attribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_assigning_function_to_object_attribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_auto_functionalize_can_with_default_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_auto_functionalize_can_with_none_return_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_auto_functionalize_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_auto_functionalize_on_view_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_auto_functionalize_optional_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_auto_functionalize_self_as_mutate_arg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_auto_functionalize_tensorlist_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_auto_functionalize_with_returns_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_backend_match_guard_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_backend_match_guard_multi_threads_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_backward_deterministic_mode_mismatch_warning_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_boolarg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_build_tuple_unpack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_builder_for_class_with_metaclass_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_builtin_abs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_builtin_isinstance_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_builtin_str_on_user_defined_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_builtin_subclasses_as_method_on_class_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_builtin_subclasses_as_method_on_var_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_call_parent_non_class_methods_from_child_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_callpacked_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_can_auto_functionalize_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cannot_trace_mark_dynamic_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cannot_trace_mark_dynamic_safe_unreached_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cast_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cat_unbacked_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_catch_watchings1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_catch_watchings2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cell_output1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cell_output2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_class_duner_mro_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_class_has_instancecheck_method_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_clone_sparse_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_closure_out_of_scope_cell_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_closure_out_of_scope_cell_with_cond_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_closure_out_of_scope_cell_with_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_closure_recompiles_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_closure_with_mutation_and_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_compare_shapes_eq_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_compare_shapes_neq_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_compare_shapes_tuple_eq_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_compare_shapes_tuple_neq_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_compare_shapes_with_constant_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_compilation_metrics_size_limit_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_compile_profiler_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cond_export_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cond_export_single_arg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cond_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cond_nested_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cond_side_effects_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cond_with_quantization_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_conditional_list_comp_in_context_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_config_getattr_default_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_config_obj_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_const_dict_variable_python_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_constant_getattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_contains_dunder_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cpp_extension_recommends_custom_ops_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cross_entropy_loss_fancy_ctor1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cross_entropy_loss_fancy_ctor2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cross_entropy_loss_simple_ctor_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cse_dict_guards_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_cuda_set_device_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_custom_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_custom_iter_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_custom_keys_iter_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_custom_module_free_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dataclass_fields_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dataclass_local_hasattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_default_args_device_dtype_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_default_dtype_change_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_defaultdict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_deque_append_left_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_deque_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_derpy_nn_module_usage_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_descriptor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_deterministic_algorithms_mutated_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_guard_on_keys_order2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_guard_on_keys_order_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_mutation_side_effect_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_namedtuple_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_order_keys_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_order_keys_modules_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_order_keys_tensors_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_reconstruct_keeps_original_order_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dict_subclass_cannot_be_initialized_in_graph_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dictcomp_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_disable_flag_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dtypes_no_graphbreaks_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dunder_methods_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dunder_new_function_inlining_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_duplicate_graph_break_log_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dynamic_one_hot_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dynamo_cache_invalidate_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dynamo_cache_move_to_front_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dynamo_compiling_fake_tensor_to_vararg_int_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dynamo_min_operator_with_shape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_dynamo_reset_clears_cache_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_empty_list_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_enum_as_dict_key_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_enum_as_dict_key_with_overloaded_str_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_enum_guards_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_enum_no_graphbreaks_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_error_on_nested_fx_trace_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_error_on_recompile_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_flat_name_to_original_fqn_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_fn_hasattr__name__1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_fn_hasattr__name__2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_fn_hasattr__name__3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_fold_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_frozen_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_frozenset_torch_func_contains_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_funcname_cache_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_function_annotation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_generate_tensor_from_list_of_numpy_primitive_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_generate_trivial_abstract_impl_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_get_attr_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_get_cache_entry_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_get_custom_tensor_attribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_get_device_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_get_instruction_source_311_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_getattr_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_getset_descriptor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_grad_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_grad_non_none_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_grad_none_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_grad_state_mutated_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_graph_break_compilation_metrics_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_graph_break_compilation_metrics_on_failure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_graph_break_correctly_when_passing_numpy_ndarray_to_torch_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guard_failure_fn2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guard_failure_fn_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guard_failure_fn_shape_control_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guard_failure_fn_tensor_iter_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guard_function_builder_with_cse_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guard_size_oblivious_backed_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guard_size_oblivious_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guard_sym_node_fstring_when_used_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guards_cse_pass_multiple_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guards_cse_pass_single_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_guards_strip_function_call_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_hasattr_nn_module_guard_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_hash_getitem_slice_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_id_guarded_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_id_guarded_object_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_id_of_nn_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_id_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_if_cond_nn_mod1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_if_cond_nn_mod2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_if_cond_nn_mod3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_if_cond_user_defined_object2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_if_cond_user_defined_object3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_if_cond_user_defined_object_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inference_mode_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_closure_not_loaded_by_parent_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_dict_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_dict_function_passed_as_arg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_dict_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_func_jump_on_tensor_condition_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_list_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_local_dict_clear_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_module_attr_dict_clear_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inline_user_defined_dict_attr_clear_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inplace_desugaring_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inplace_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inplace_param_update_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inplace_view_on_graph_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_input_set_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inspect_signature_bind_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_inspect_signature_bind_non_user_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_int_int_comparisons_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_int_list_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_int_neg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_int_shape_binops_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_int_shape_comparisons_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_int_shape_inplace_binops_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_intermediary_tensor_grad_access_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_interpolate_propagate_real_tensors_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_invalid_args_builtin_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_is_compiling_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_is_floating_point2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_is_floating_point_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_is_tensor2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_is_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_is_tensor_like2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_is_tensor_like_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_item_changes_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_item_changes_new_shape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_item_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_iter_set_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_iter_type_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_accumulate_symint_default_sum_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_accumulate_tensors_builtins_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_accumulate_tensors_default_sum_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_accumulate_tensors_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_accumulate_tensors_user_defined_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_groupby_pure_python_default_identify_func_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_groupby_pure_python_key_func_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_infinite_count_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_infinite_cycle_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_infinite_repeat_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_infinite_repeat_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_itertools_repeat_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_large_reduction_list_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_linear_module_free_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_list_append_return_none_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_list_hasattr1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_list_hasattr2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_list_iadd_side_effect_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_list_iadd_with_shape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_list_iterator_contains_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_list_mul_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_list_slice_mul_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_listcomp_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_load_fast_and_clear_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_mandelbrot_numpy_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_map_side_effects_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_map_with_quantization_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_mark_dynamic_with_ranges_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_mark_static_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_matmul1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_min_max_over_iterable_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_module_complex_iter_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_module_deepcopy_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_module_dunder_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_module_not_callable_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_named_parameters_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_namedtuple1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_namedtuple2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_namedtuple3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nan_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_closure_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_function_resuming_with_correct_globals_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_optimize_decorator_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_optimize_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_optimize_run_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_sequential_try_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_sequential_try_with_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_sequential_try_with_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_sequential_with_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nested_wraps_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_new_with_int_list_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nn_functional_reduction_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nn_module_getattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nn_module_getattribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nn_sequential_invocation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nn_sequential_invocation_reposition_indices_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_no_error_on_nested_fx_trace_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_no_guard_for_unused_sym_node_fstring_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_no_raise_guard_partial_constraint_across_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_no_raise_guard_partial_constraint_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_non_pt2_compliant_ops_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_nonzero_static_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_not_dynamic_scope_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numel_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_array_of_arrays_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_as_global_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_fallback_on_eager_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_force_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_gt_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_int_constant_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_iter_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_min_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_ndarray_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_ndarray_graph_break_with_multiple_outputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_ndarray_works_with_builtin_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_no_raise_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_non_torch_dtype_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_random_config_to_numpy_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_readonly_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_recompilation_scalar_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_size_attr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_subdtype_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_take_along_axis_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_tolist_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_torch_operators_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_ufunc_out_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_ufunc_out_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_unique_f16_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_variable_isinstance_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_numpy_with_builtin_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_object_classmethod_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_object_setattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_object_staticmethod_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_onnx_shape_as_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_optimize_on_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_optree_graph_break_message_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_ordered_dict_alias_reconstruct_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_ordered_dict_move_to_end_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_os_environ_get_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_os_environ_set_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_out_variant_custom_op_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_out_variants_with_resizing_on_graph_inputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_out_variants_with_resizing_on_graph_inputs_with_dynamic1_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_out_variants_with_resizing_on_graph_inputs_with_dynamic_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_outside_linear_module_free_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_packaging_version_parse_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_pair_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_param_shape_binops_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_parameter_free_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_parsing_sdpa_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_patched_builtin_functions_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_pt2_compliant_ops_are_allowed_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_pt2_compliant_overload_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_pure_python_accumulate_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_py_guards_mark_dynamic_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_python_slice_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_raise_guard_full_constraint_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_raise_guard_indirect_full_constraint_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_raise_guard_partial_constraint_across_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_raise_guard_partial_constraint_no_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_raise_on_backend_error_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_raises_importerror1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_raises_importerror2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_raises_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_rand_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_range_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_range_with_shape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_real_imag_tensor_attribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_recompile_message_on_parameter_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_recompile_on_global_state_change_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_reconstruct_set_across_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_recursive_inline_list_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_recursive_tensor_attribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_release_input_memory_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_release_module_memory_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_release_scope_memory_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_repeat_interleave_graphbreaks_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_repro_graph_breaks_in__get_item_by_idx_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_restore_graphstate_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_return_dict_with_graph_break_and_update_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_return_nested_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_runtime_assert_replacement_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_sample_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_scalar_tensor_is_equivalent_to_int_list_argument_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_scalar_tensor_is_equivalent_to_symint_argument_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_scalar_tensor_is_equivalent_to_symint_list_argument_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_sequential_module_free_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_set_aliasing_recompiles_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_set_custom_tensor_attribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_setattr_mutation1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_setattr_mutation2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_setattr_mutation3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_and_tuple_equality_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_equal_constructor_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_equal_create_symbolic_sizes_strides_storage_offset_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_equal_empty_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_equal_evaluate_expr_divisible_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_equal_evaluate_expr_refinement_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_equal_evaluate_expr_replacement_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_equal_runtime_assert_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_equal_unbacked_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_no_recording_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_env_recorded_function_fallback_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_int_comparisons_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_int_inplace_binops_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_shape_unpack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_side_effects_codegen_update_mutated_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_simple_set_usage_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_size_dim_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_size_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_slice_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_source_non_input_grad_access_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_storage_return_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_str_format_assert1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_str_format_assert2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_str_format_return1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_str_format_return2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_stride_dim_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_super_after_graph_break_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_super_calling_with_metaclass_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_sym_constrain_range_on_replaced_unbacked_symbol_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_symint_as_device_kwarg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_symint_as_device_kwarg_multi_gpu_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_symint_as_device_kwarg_non_strict_export_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_sys_modules_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tagging_tensors_mix_used_unused_structure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tagging_tensors_simple_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_build_list_unpack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_ctor_list_of_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_data_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_dict1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_dict2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_dict3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_dot_grad_no_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_hasattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_interacts_with_numpy_ndarray_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_is_contiguous_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_item_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_item_no_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_iter_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_layout_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tensor_types_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tolist_0d_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tolist_1d_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tolist_float_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tolist_kd_dynamic_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tolist_kd_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tolist_scalar_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_top_package_import_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_check_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_check_is_size_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_check_symbolic_shape_rel_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_compile_ctx_on_forward_and_training_step_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_cuda_is_available_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_cudnn_is_acceptable_bad_inputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_cudnn_is_acceptable_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_device_python_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_distributions_lazy_property_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_dtype_python_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_dynamo_codegen_pow_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_generator_set_state_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_guards_stack_frame_register_inlining_deep_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_guards_stack_frame_register_inlining_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_nn_parameter_isinstance_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_objects_as_keys_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_package_working_with_trace_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_seed_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_size_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_size_numel_dynamic_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_size_numel_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_torch_variable_hasattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_trace_ndarray_frame_2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_trace_ndarray_frame_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tracing_nested_py_tree_dicts_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tracing_nested_py_tree_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tracing_nested_py_tree_mixed_all_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tracing_nested_py_tree_tuples_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tracing_py_tree_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tracing_py_tree_tensor_subclass_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tracing_tree_map_only_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tuple_from_tuple_iter_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tuple_hasattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tuple_iadd_with_shape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tuple_mul_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_tuple_mul_with_shape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_type_copy_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_typing_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_typing_typevar_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_typing_union_and_optional_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_typing_variable_isinstance_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_unbacked_auto_functionalize_op_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_unbacked_symint_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_unhandled_exception_in_dynamo2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_unhandled_exception_in_dynamo_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_unpack4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_unpack5_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_unpack_tensor_shape_mismatch_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_update_locals_and_stack_uses_shared_cache_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_defined_binop_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_defined_class_name_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_defined_class_python_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_defined_iter_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_defined_setattr1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_defined_setattr2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_function_variable_supports_enum_argument_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_function_variable_supports_function_argument_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_function_variable_supports_type_abcmeta_argument_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_getattr1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_getattr2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_getattribute_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_user_property_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_usr_cls_classmethod_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_usr_cls_staticmethod_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_validate_outputs_unbacked_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_variable_access_in_exception_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_variable_tracker_recursively_contains_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_version_ci_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_with_builtin_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_write_to_closures_in_inlining_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_yield_from_in_a_loop_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_yield_from_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_yield_from_user_stop_iteration_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_yield_gen_and_from_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesMiscTests::test_yield_send_to_subgenerator_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_T_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_add__inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_add_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_addcdiv__inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_addcdiv_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_build_list_unpack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_call_dict1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_call_dict2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_call_dict3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_call_dict4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_call_dict5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_callable_builtin_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_callable_class_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_callable_lambda_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_callable_list_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_callable_torch_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_chunks1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_class_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_cls_eq_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_cls_hasattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_cls_is_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_compare_constant_and_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_complex_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_const_tuple_add1_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_const_tuple_add2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_constant1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_constant2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_constant3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_constant4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_context_wrapping_nested_functions_no_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_cublas_allow_tf32_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_custom_dict_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_default_dict_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_default_dict_constr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_default_dict_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_default_dict_lambda_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_default_dict_list_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_default_dict_set_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_default_dict_tuple_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_del_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_deque_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_device_constant_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_device_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_copy_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_fromkeys_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_id_guard_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_keys_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_mutable_map_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_ops_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_param_keys_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_sorted_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_tuple_lazy_guard_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_update_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dict_values_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_distributed_is_available_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_distributed_is_initialized_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dtype_compare_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_dtype_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_elipsis_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_finfo_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_flat_param_same_storage_size_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_float_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_fn_with_self_set_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_fstrings1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_fstrings2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_fstrings3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_fstrings4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_fstrings5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_fstrings6_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_funcdef_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_functools_partial_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_get_autocast_gpu_dtype_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_get_calculate_correct_fan_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_get_default_dtype_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_get_device_properties_tensor_device_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_get_privateuse1_name_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_globalfn_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_globalmodule_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_globalvar_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_import1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_in_not_in_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_index_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_indexed_range_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_indirect1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_indirect2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_indirect3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_inline_jit__unwrap_optional_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_inline_jit_annotations_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_inline_lru_cache_fn_with_default_args_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_inline_script_if_tracing_fn_with_default_args_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_inline_softmax_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_inline_with_default_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_inner_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_any_autocast_enabled_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_complex_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_contiguous_frame_counts_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_contiguous_memory_format_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_floating_point_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_fx_tracing_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_in_onnx_export_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_integer_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_not_null_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_quantized_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_is_sparse_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_islice_chain_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_itertools_chain_from_iterable_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_itertools_chain_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_itertools_combinations_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_itertools_product_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_jit_annotate_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_len_constant_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_len_constant_list_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_len_constant_misc_iterables_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_len_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_add_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_add_then_mutate_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_clear_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_compare_polyfill_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_convert_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_expand_lhs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_index_with_constant_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_reversed_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_slice_assignment_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_sorted1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_sorted2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_list_truth_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_listarg1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_listarg2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_listarg3_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_listarg4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_listarg5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_load_global_bool_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_mT_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_manual_seed_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_map_sum_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_math_radians_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_mean_sum_np_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_methodcall1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_methodcall2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_methodcall3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_min_max_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_module_constant_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_namedtuple_defaults_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_namedtuple_hasattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_namedtuple_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_namedtuple_user_methods_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_ndarray_builtin_functions_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_ndarray_method_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_ndarray_methods_returning_scalar_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_ndarray_reshape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_ndarray_transpose_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_ndim_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_no_recompile_inner_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_no_recompile_inner_lambda_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_non_inlined_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_not_list_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_np_constant_collections_as_input_int_or_float_float_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_np_constant_collections_as_input_int_or_float_int_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_np_constant_collections_guards_float_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_np_constant_collections_guards_int_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_np_finfo_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_np_iinfo_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_number_method_method_as_integer_ratio_num_type0_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_number_method_method_as_integer_ratio_num_type3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_number_method_method_bit_length_num_type1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_number_method_method_conjugate_num_type2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_number_method_method_conjugate_num_type4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_number_method_method_hex_num_type5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_number_method_method_is_integer_num_type6_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_numpy_attributes_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_numpy_dtype_argument_to_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_numpy_dtype_call_in_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_numpy_fft_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_numpy_linalg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_numpy_meshgrid_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_numpy_random_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_numpy_size_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_obj_eq_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_obj_is_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_ordered_dict_kwargs_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partial_across_graph_break_uninvoked_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_as_input_UDF_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_as_input_partials_lambda_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_as_input_partials_mod_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_graph_break_reconstruct_args_and_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_graph_break_reconstruct_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_graph_break_reconstruct_mix_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_graph_break_reconstruct_mix_no_source_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___annotations___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___builtins___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___call___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___class___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___closure___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___code___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___defaults___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___delattr___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___dict___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___dir___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___doc___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___eq___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___format___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___ge___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___get___inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___getattribute___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___globals___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___gt___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___hash___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___init___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___init_subclass___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___kwdefaults___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___le___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___lt___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___module___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___name___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___ne___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___new___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___qualname___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___reduce___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___reduce_ex___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___repr___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___setattr___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___sizeof___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___str___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr___subclasshook___inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr_args_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr_func_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_attr_keywords_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_hasattr_set_attr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_lambda_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_recompilation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_torch_op_arg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_torch_op_kwarg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_udf_arg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_udf_kwarg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_udf_kwarg_method_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_partials_udf_kwarg_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_pop_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_pos_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_pow_int_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_promote_types_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_rand_inlined_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_rand_tensor_partial_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_range1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_range2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_range_length_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_range_with_index_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_range_with_slice_index_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_reduce_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_reduce_with_initial_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_reduce_with_none_initial_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_reduce_with_single_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_reduce_with_single_with_initial_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_return_dict2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_return_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_return_multiple_numpy_ndarray_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_return_numpy_ndarray_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_return_tuple1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_return_tuple2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_set_contains_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_set_difference_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_set_intersection_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_set_isdisjoint_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_set_keys_view_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_set_union_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_set_update_bytecode_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_set_update_list_with_duplicated_items_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_shape1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_shape2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_slice1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_slice2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_slice3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_slice4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_slice5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_slice6_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_sliced_range_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_startswith_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_sum_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_sum_shortcut_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_sum_shortcut_with_start_arg_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_sum_shortcut_with_start_kwarg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_sum_with_start_arg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_sum_with_start_kwarg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_symbool_to_int_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_element_size_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_is_complex_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_len_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_new_with_shape_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_new_with_size_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_size_indexed_by_symint_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_type2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_type3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_type4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_type5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tensor_type_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_to_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_torch_distributions_functions_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_torch_from_numpy_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_torch_size_hasattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_transpose_for_scores_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_truth_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tuple1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tuple2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tuple_contains_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tuple_iadd_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_tuple_sorted_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unary_fold_op_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unary_fold_op_seq_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unpack1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unpack2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unpack3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unpack_ex1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unpack_ex2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unpack_ex3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_unpack_mutable_map_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_viamethod_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_viatorch_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFunctionTests::test_zip_longest_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_access_by_keys_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_basicmodule1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_basicmodule2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_call_fn_with_non_const_inputs_safe_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_cfgmod_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_children_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_constloop_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_conv_call_forward_directly_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_conv_call_super_forward_directly_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_conv_transpose_call_forward_directly_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_conv_transpose_call_super_forward_directly_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_densenet_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_enumvalues_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_fnmember_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_fnmembercmp1_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_fnmembercmp2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_forward_directly_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_generation_tag_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_hasattr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_intarg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_iseval1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_iseval2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_isnonelayer_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_istraining1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_istraining2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_layerlist_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module6_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module7_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_lazy_module_no_cls_to_become_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_attribute_precedence_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_call_module_with_static_forward_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_class_method_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_comparison_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_forward_has_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_guard_name_is_valid_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_name_string_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_property_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_module_static_method_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_moduledict_custom_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_moduledict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_modulelist_custom_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_modulelist_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_modulelist_nested_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_modulemethod1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_modulemethod2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_named_children_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_nn_moduledict_contains_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_parameterdict_custom_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_parameterdict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_parameters1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_parameters2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_parameters3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_parameters4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_parameters5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_self_mutating1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_seq_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_sequential_with_duplicated_module2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_sequential_with_duplicated_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_simple_torch_function_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_stringmember_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_submodules1_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_submodules2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_super1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_super2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_super_class_method_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_tensorlist_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_torch_function_with_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_unsupportedmethod_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_unsupportedmodule_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesNNModuleTests::test_viamodulecall_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_access_module_attr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_constants_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_global_num_adds_guard_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_global_num_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_input_num_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_numpy_number_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_tracked_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_tracked_nested_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_untracked_global_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_untracked_global_nested_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_untracked_nonlocal_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_capture_value_created_in_subgraph_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_branches_no_arguments_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_branches_no_arguments_no_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_free_variable_in_both_branches_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_graph_break_in_one_branch_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_pytree_operands_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_pytree_operands_with_non_tensor_leaves_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_side_effect_in_one_branches_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_source_fn_stack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_subgraph_name_is_valid_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_cond_with_constant_pred_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_enum_arg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_error_message_sane_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_fallback_on_graph_break_complicated_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_fallback_on_graph_break_simple_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_fallback_on_python_primitives_output_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_flat_list_output_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_fn_with_kwargs_in_torch_ops_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_freevars_as_inputs_to_wrap_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_grad_source_fn_stack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_hooks_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_inlined_functions_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_internal_nonlocal_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_lift_tensor_constant_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_make_closure_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_example_value_metadata_consistent_with_eager_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_lowers_to_graph_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_multi_return_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_pytree_return_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_side_effect_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_source_fn_stack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_subgraph_name_is_valid_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_map_symint_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_modules_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_nested_tuple_output_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_nested_wrap_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_no_freevars_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_output_with_dict_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_register_mode_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_register_subclass_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_return_captured_var_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_return_captured_var_used_multiple_times_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_return_captured_vars_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_same_freevar_twice_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_del_existing_attr_global_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_del_existing_attr_global_obj_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_del_existing_attr_nonlocal_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_del_existing_attr_nonlocal_obj_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_in_body_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_local_list_append_no_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_global_list_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_global_num_builtin_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_global_num_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_global_tensor_builtin_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_global_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_nonlocal_num_builtin_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_nonlocal_num_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_nonlocal_tensor_builtin_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_mutate_nonlocal_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_nested_nonlocal_list_append_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_nonlocal_list_append_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_set_existing_attr_global_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_set_existing_attr_global_obj_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_set_existing_attr_nonlocal_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_set_existing_attr_nonlocal_obj_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_set_new_attr_global_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_set_new_attr_global_obj_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_set_new_attr_nonlocal_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_side_effect_set_new_attr_nonlocal_obj_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_symint_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_vmap_multiply_scalar_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_vmap_source_fn_stack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_all_kwarg_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_allow_local_assign_in_body_fn_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_kwarg_default_else_branch_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_kwarg_default_if_branch_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_kwarg_default_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_kwarg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_kwarg_int_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_kwarg_only_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_kwarg_recompile_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_pytree_args_nested_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_pytree_args_not_const_symint_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_pytree_args_with_symint_constant_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_pytree_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_source_fn_stack_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesHigherOrderOpTests::test_wrap_subgraph_name_is_valid_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_functional_call_disable_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_functional_call_disable_inline_nn_module_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_functional_call_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_functional_call_sequential_params_and_buffers_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_capture_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_closure_scalar_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_disable_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_fn_with_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_freevar_python_scalar_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_freevar_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_has_aux_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_non_tensor_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_over_grad_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_pytree_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_recompile_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_two_tensor_all_grad_has_aux_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_two_tensor_has_aux_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_with_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_grad_with_side_effect_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_hessian_argnums_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_hessian_disable_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_hessian_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacfwd_disable_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacfwd_has_aux_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacfwd_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacfwd_randomness_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacfwd_two_tensors_argnums_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacrev_disable_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacrev_has_aux_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacrev_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jacrev_two_tensors_argnums_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_disable_capture_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_freevar_python_scalar_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_freevar_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_has_aux_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_jvp_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_simple_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_two_tensors_disable_enable_disable_grad_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_two_tensors_disable_grad_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_jvp_two_tensors_has_aux_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_linearize_disable_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_linearize_jvp_fn_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vjp_disable_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vjp_has_aux_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vjp_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vjp_multiple_outputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vjp_multiple_outputs_python_struct_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_disable_capture_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_free_const_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_free_tensor_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_get_wrapped_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_kwargs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_multiple_invocation_in_dims_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_multiple_invocation_out_dims_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_multiple_outputs_diff_dims_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_multiple_outputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_multiple_outputs_out_dims_tuple_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_new_tensor_implicit_via_op_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_new_tensor_in_body_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_new_tensor_unused_in_body_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_over_vmap_captured_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_over_vmap_two_inputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_previous_illegal_op_no_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_pytree_inputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_recompile_different_config_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_recompile_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_recompile_same_config_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_recompile_with_randomness_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_side_effects_append_input_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_side_effects_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_two_inputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_two_inputs_tuple_in_dims_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_with_conditional_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_with_graph_break_2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_with_graph_break_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesFuncTorchHigherOrderOpTests::test_vmap_with_graph_break_lambda_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_LSTM_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_alias_inputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_aot_autograd_expand_mutation_backwards_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_aot_autograd_expand_mutation_error_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_aot_autograd_expand_mutation_functionalizes_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_aot_autograd_raises_invalid_leaf_set_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_aot_export_joint_simple_repro_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_aot_grad_mode_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_aot_sequence_nr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_arg_dupe_via_dynamo_recompiles_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_arg_dupe_via_dynamo_recompiles_many_args_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_arg_dupe_via_dynamo_recompiles_many_args_param_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_arg_dupe_via_dynamo_recompiles_many_args_param_non_tensor_arg_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_arg_dupe_via_dynamo_recompiles_many_args_param_non_tensor_arg_list_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_arg_dupe_via_dynamo_recompiles_many_with_global_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_call_fn_with_non_const_inputs_aot_safe_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_call_fn_with_non_const_inputs_aot_unsafe_control_flow_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_call_fn_with_non_const_inputs_aot_unsafe_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_data_ptr_access_copy_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_data_ptr_access_fails_in_backward_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_data_ptr_access_fails_in_forward_inline_inbuilt_nn_modules, 
test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer5_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer_with_retain_or_create_graph1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer_with_retain_or_create_graph2_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer_with_retain_or_create_graph3_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_donated_buffer_with_retain_or_create_graph4_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_double_backward_errors_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_eager_sequence_nr_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_grad_inputs_alias_inputs_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_multiple_aot_autograd_calls_dupe_args_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_mutation1_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_negative_testing_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_negative_testing_mutation_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_nn_parameter_construction_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_requires_grad_fake_via_dynamo_recompiles_inline_inbuilt_nn_modules, test/dynamo/test_inline_inbuilt_nn_modules.py::InlineInbuiltNNModulesAotAutogradFallbackTests::test_split_with_sizes_aot_autograd_cleans_up_traceback_meta_inline_inbuilt_nn_modules 2024-08-07T19:18:31.1961512Z 2024-08-07T19:18:32.1131953Z 2024-08-07T19:18:32.1132584Z real 78m19.766s 2024-08-07T19:18:32.1132928Z user 147m13.161s 2024-08-07T19:18:32.1133241Z sys 7m58.935s 2024-08-07T19:18:32.1133550Z + assert_git_not_dirty 2024-08-07T19:18:32.1133942Z + [[ linux-focal-cuda12.1-py3.10-gcc9 != *rocm* ]] 2024-08-07T19:18:32.1134463Z + [[ linux-focal-cuda12.1-py3.10-gcc9 != *xla* ]] 2024-08-07T19:18:32.1140047Z ++ git status --porcelain 
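
The "+"/"++" trace lines around this point are the job's post-test git cleanliness check: once the shard finishes, the job fails if testing left tracked files modified, while untracked third_party checkouts are tolerated. A minimal sketch of such a check, reconstructed from the traced commands only (the rocm/xla exemptions and the third_party filter follow the trace; the BUILD_ENVIRONMENT variable name and the exact failure handling are assumptions):

assert_git_not_dirty() {
  # ROCm and XLA jobs are expected to modify the tree, so they are skipped
  # (the trace compares the build environment string against *rocm* and *xla*).
  if [[ "${BUILD_ENVIRONMENT}" != *rocm* ]] && [[ "${BUILD_ENVIRONMENT}" != *xla* ]]; then
    # grep exits non-zero when nothing survives the filter, which is why the
    # trace shows "++ true" after an all-clean status.
    local git_status
    git_status=$(git status --porcelain | grep -v '?? third_party' || true)
    if [[ -n "${git_status}" ]]; then
      echo "Build left the git tree dirty: ${git_status}"
      exit 1
    fi
  fi
}
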
2024-08-07T19:18:32.1144910Z ++ grep -v '?? third_party' 2024-08-07T19:18:35.1095440Z ++ true 2024-08-07T19:18:35.1095826Z + git_status= 2024-08-07T19:18:35.1096153Z + [[ -n '' ]] 2024-08-07T19:18:35.1096461Z + cleanup_workspace 2024-08-07T19:18:35.1097086Z + echo 'sudo may print the following warning message that can be ignored. The chown command will still run.' 2024-08-07T19:18:35.1098030Z sudo may print the following warning message that can be ignored. The chown command will still run. 2024-08-07T19:18:35.1098876Z + echo ' sudo: setrlimit(RLIMIT_STACK): Operation not permitted' 2024-08-07T19:18:35.1099484Z sudo: setrlimit(RLIMIT_STACK): Operation not permitted 2024-08-07T19:18:35.1100144Z + echo 'For more details refer to https://github.com/sudo-project/sudo/issues/42' 2024-08-07T19:18:35.1100880Z For more details refer to https://github.com/sudo-project/sudo/issues/42 2024-08-07T19:18:35.1101448Z + sudo chown -R 1000 /var/lib/jenkins/workspace 2024-08-07T19:18:35.7215341Z ##[group]Run cat test/**/*_toprint.log || true 2024-08-07T19:18:35.7215894Z cat test/**/*_toprint.log || true 2024-08-07T19:18:35.7226795Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T19:18:35.7227308Z env: 2024-08-07T19:18:35.7227603Z GIT_DEFAULT_BRANCH: main 2024-08-07T19:18:35.7228043Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T19:18:35.7228787Z DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T19:18:35.7229445Z ##[endgroup] 2024-08-07T19:18:35.7320526Z cat: 'test/**/*_toprint.log': No such file or directory 2024-08-07T19:18:35.7385156Z ##[group]Run kill "$MONITOR_SCRIPT_PID" 2024-08-07T19:18:35.7385682Z kill "$MONITOR_SCRIPT_PID" 2024-08-07T19:18:35.7393101Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T19:18:35.7393614Z env: 2024-08-07T19:18:35.7393919Z GIT_DEFAULT_BRANCH: main 2024-08-07T19:18:35.7394367Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T19:18:35.7395642Z DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T19:18:35.7396337Z MONITOR_SCRIPT_PID: 89443 2024-08-07T19:18:35.7396674Z ##[endgroup] 2024-08-07T19:18:35.7573116Z Prepare all required actions 2024-08-07T19:18:35.7573712Z Getting action download info 2024-08-07T19:18:35.8810289Z Download action repository 'actions/upload-artifact@v3' (SHA:a8a3f3ad30e3422c9c7b888a15615d19a852ae32) 2024-08-07T19:18:36.0933259Z ##[group]Run ./.github/actions/upload-test-artifacts 2024-08-07T19:18:36.0933904Z with: 2024-08-07T19:18:36.0934397Z file-suffix: test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521 2024-08-07T19:18:36.0935039Z s3-bucket: gha-artifacts 2024-08-07T19:18:36.0935404Z env: 2024-08-07T19:18:36.0935684Z GIT_DEFAULT_BRANCH: main 2024-08-07T19:18:36.0936158Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T19:18:36.0936958Z DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T19:18:36.0937580Z ##[endgroup] 2024-08-07T19:18:36.0979066Z ##[group]Run # Remove any previous test jsons if they exist 2024-08-07T19:18:36.0979735Z # Remove any previous test jsons if they exist 2024-08-07T19:18:36.0980256Z rm -f test-jsons-*.zip 2024-08-07T19:18:36.0980777Z zip -r "test-jsons-${FILE_SUFFIX}.zip" test -i '*.json' 2024-08-07T19:18:36.0987531Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T19:18:36.0988015Z env: 2024-08-07T19:18:36.0988312Z GIT_DEFAULT_BRANCH: main 
2024-08-07T19:18:36.0988737Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T19:18:36.0989626Z DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T19:18:36.0990449Z FILE_SUFFIX: test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521 2024-08-07T19:18:36.0991002Z ##[endgroup] 2024-08-07T19:18:36.1251764Z adding: test/allowlist_for_publicAPI.json (deflated 79%) 2024-08-07T19:18:36.1287131Z adding: test/benchmark_utils/callgrind_artifacts.json (deflated 92%) 2024-08-07T19:18:36.1288109Z adding: test/minioptest_failures_dict.json (deflated 70%) 2024-08-07T19:18:36.1296191Z adding: test/profiler/profiler_utils_mock_events.json (deflated 87%) 2024-08-07T19:18:36.1300998Z adding: test/test-reports/td_exclusions-1388d9edcbc5bb48b175.json (deflated 81%) 2024-08-07T19:18:36.1307637Z adding: test/.pytorch-slow-tests.json (deflated 81%) 2024-08-07T19:18:36.1323527Z adding: test/.pytorch-disabled-tests.json (deflated 89%) 2024-08-07T19:18:36.1365040Z ##[group]Run # Remove any previous test reports if they exist 2024-08-07T19:18:36.1365693Z # Remove any previous test reports if they exist 2024-08-07T19:18:36.1366200Z rm -f test-reports-*.zip 2024-08-07T19:18:36.1366784Z zip -r "test-reports-${FILE_SUFFIX}.zip" test -i '*.xml' -i '*.csv' 2024-08-07T19:18:36.1373876Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T19:18:36.1374374Z env: 2024-08-07T19:18:36.1374673Z GIT_DEFAULT_BRANCH: main 2024-08-07T19:18:36.1375136Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T19:18:36.1375879Z DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T19:18:36.1376748Z FILE_SUFFIX: test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521 2024-08-07T19:18:36.1377357Z ##[endgroup] 2024-08-07T19:18:36.1609436Z adding: test/test-reports/python-pytest/test_transformers/test_transformers-9c11558a523e0933.xml (deflated 28%) 2024-08-07T19:18:36.2589737Z adding: test/test-reports/python-pytest/test_transformers/test_transformers-6a9eb05ef756150e.xml (deflated 99%) 2024-08-07T19:18:36.2590926Z adding: test/test-reports/python-pytest/test_transformers/test_transformers-efb1627476a74b05.xml (deflated 35%) 2024-08-07T19:18:36.2732604Z adding: test/test-reports/python-pytest/test_transformers/test_transformers-68dbd8fab867c5cc.xml (deflated 98%) 2024-08-07T19:18:36.2733763Z adding: test/test-reports/python-pytest/functorch.test_ops/functorch.test_ops-19fdf801c742757f.xml (deflated 28%) 2024-08-07T19:18:36.2735191Z adding: test/test-reports/python-pytest/functorch.test_ops/functorch.test_ops-e0fe62ebadf01b72.xml (deflated 28%) 2024-08-07T19:18:36.2764521Z adding: test/test-reports/python-pytest/functorch.test_ops/functorch.test_ops-390db331aa6a5188.xml (deflated 93%) 2024-08-07T19:18:36.2794710Z adding: test/test-reports/python-pytest/functorch.test_ops/functorch.test_ops-34adc148380538ce.xml (deflated 93%) 2024-08-07T19:18:36.2796358Z adding: test/test-reports/python-pytest/test_ops/test_ops-636d67bdb788961a.xml (deflated 28%) 2024-08-07T19:18:36.2797296Z adding: test/test-reports/python-pytest/test_ops/test_ops-4eb18a9d756d278c.xml (deflated 28%) 2024-08-07T19:18:36.2891588Z adding: test/test-reports/python-pytest/test_ops/test_ops-2cc0034c2de3ea9c.xml (deflated 96%) 2024-08-07T19:18:36.3033081Z adding: test/test-reports/python-pytest/test_ops/test_ops-c832da8d53d0b2ed.xml (deflated 97%) 2024-08-07T19:18:36.3034035Z adding: 
test/test-reports/python-pytest/test_decomp/test_decomp-4724d23c1f6b4db1.xml (deflated 28%) 2024-08-07T19:18:36.3035038Z adding: test/test-reports/python-pytest/test_decomp/test_decomp-85d63c7bd730d9e9.xml (deflated 27%) 2024-08-07T19:18:36.3036020Z adding: test/test-reports/python-pytest/test_decomp/test_decomp-76062bb55e80cc69.xml (deflated 28%) 2024-08-07T19:18:36.3037052Z adding: test/test-reports/python-pytest/test_decomp/test_decomp-1bb8c448dab1671c.xml (deflated 28%) 2024-08-07T19:18:36.3043849Z adding: test/test-reports/python-pytest/test_decomp/test_decomp-36ff22cbe4a6628b.xml (deflated 91%) 2024-08-07T19:18:36.3052041Z adding: test/test-reports/python-pytest/test_decomp/test_decomp-569eeffb4a11944c.xml (deflated 91%) 2024-08-07T19:18:36.3061759Z adding: test/test-reports/python-pytest/test_decomp/test_decomp-673ac4c4a6d1fe0f.xml (deflated 91%) 2024-08-07T19:18:36.3070794Z adding: test/test-reports/python-pytest/test_decomp/test_decomp-102a5591d99ae265.xml (deflated 91%) 2024-08-07T19:18:36.3071780Z adding: test/test-reports/python-pytest/test_modules/test_modules-dc7836262182043e.xml (deflated 28%) 2024-08-07T19:18:36.3194744Z adding: test/test-reports/python-pytest/test_modules/test_modules-c70bcb09eaabb2c3.xml (deflated 99%) 2024-08-07T19:18:36.3196211Z adding: test/test-reports/python-pytest/test_nestedtensor/test_nestedtensor-9590ea3d8ba9555e.xml (deflated 28%) 2024-08-07T19:18:36.3226185Z adding: test/test-reports/python-pytest/test_nestedtensor/test_nestedtensor-4f741c5713a55974.xml (deflated 96%) 2024-08-07T19:18:36.3227443Z adding: test/test-reports/python-pytest/inductor.test_torchinductor/inductor.test_torchinductor-6928065e824013a8.xml (deflated 28%) 2024-08-07T19:18:36.3234486Z adding: test/test-reports/python-pytest/inductor.test_torchinductor/inductor.test_torchinductor-859c67bd94c4e7df.xml (deflated 92%) 2024-08-07T19:18:36.3235622Z adding: test/test-reports/python-pytest/test_meta/test_meta-31ce06f6cbd5ddd4.xml (deflated 28%) 2024-08-07T19:18:36.3236610Z adding: test/test-reports/python-pytest/test_meta/test_meta-b10ef3e410a7cb25.xml (deflated 28%) 2024-08-07T19:18:36.3391520Z adding: test/test-reports/python-pytest/test_meta/test_meta-1d51358895991c18.xml (deflated 96%) 2024-08-07T19:18:36.3554443Z adding: test/test-reports/python-pytest/test_meta/test_meta-5d9269b52f59b1e7.xml (deflated 96%) 2024-08-07T19:18:36.3555720Z adding: test/test-reports/python-pytest/inductor.test_torchinductor_dynamic_shapes/inductor.test_torchinductor_dynamic_shapes-aa94731bc097d2ac.xml (deflated 28%) 2024-08-07T19:18:36.3562633Z adding: test/test-reports/python-pytest/inductor.test_torchinductor_dynamic_shapes/inductor.test_torchinductor_dynamic_shapes-36e45d3dafe04b35.xml (deflated 92%) 2024-08-07T19:18:36.3563955Z adding: test/test-reports/python-pytest/test_ops_jit/test_ops_jit-e8504e1c3ed87935.xml (deflated 28%) 2024-08-07T19:18:36.3572146Z adding: test/test-reports/python-pytest/test_ops_jit/test_ops_jit-77671e46f06fc097.xml (deflated 93%) 2024-08-07T19:18:36.3573267Z adding: test/test-reports/python-pytest/dynamo.test_skip_non_tensor/dynamo.test_skip_non_tensor-5a0c269e3d17b75b.xml (deflated 28%) 2024-08-07T19:18:36.3574784Z adding: test/test-reports/python-pytest/dynamo.test_skip_non_tensor/dynamo.test_skip_non_tensor-b41f3e328465e5f2.xml (deflated 75%) 2024-08-07T19:18:36.3576057Z adding: test/test-reports/python-pytest/dynamo.test_interop/dynamo.test_interop-d011d81ed84f1614.xml (deflated 28%) 2024-08-07T19:18:36.3577205Z adding: 
test/test-reports/python-pytest/dynamo.test_interop/dynamo.test_interop-9fbfd0c3892e8540.xml (deflated 70%) 2024-08-07T19:18:36.3578560Z adding: test/test-reports/python-pytest/inductor.test_extension_backend/inductor.test_extension_backend-0a833269b6d1205b.xml (deflated 28%) 2024-08-07T19:18:36.3579952Z adding: test/test-reports/python-pytest/inductor.test_extension_backend/inductor.test_extension_backend-38459b0366bbb235.xml (deflated 52%) 2024-08-07T19:18:36.3581374Z adding: test/test-reports/python-pytest/inductor.test_compiled_optimizers/inductor.test_compiled_optimizers-a82e3b89ea126072.xml (deflated 28%) 2024-08-07T19:18:36.3591977Z adding: test/test-reports/python-pytest/inductor.test_compiled_optimizers/inductor.test_compiled_optimizers-f2800fe674fbea60.xml (deflated 97%) 2024-08-07T19:18:36.3593242Z adding: test/test-reports/python-pytest/export.test_tools/export.test_tools-6927a3ea9d371d75.xml (deflated 28%) 2024-08-07T19:18:36.3594348Z adding: test/test-reports/python-pytest/export.test_tools/export.test_tools-74619ef51408b7f0.xml (deflated 48%) 2024-08-07T19:18:36.3596197Z adding: test/test-reports/python-pytest/dynamo.test_inline_inbuilt_nn_modules/dynamo.test_inline_inbuilt_nn_modules-ba4a374c238fbec6.xml (deflated 28%) 2024-08-07T19:18:36.3654250Z adding: test/test-reports/python-pytest/dynamo.test_inline_inbuilt_nn_modules/dynamo.test_inline_inbuilt_nn_modules-b9e9be465ffacf82.xml (deflated 92%) 2024-08-07T19:18:36.3698583Z ##[group]Run # Remove any previous usage logs if they exist 2024-08-07T19:18:36.3699179Z # Remove any previous usage logs if they exist 2024-08-07T19:18:36.3699650Z rm -f logs-*.zip 2024-08-07T19:18:36.3700221Z # this workflow is also run in bazel build test, but we don't generate usage reports for it 2024-08-07T19:18:36.3700895Z # so check to see if the file exists first 2024-08-07T19:18:36.3701358Z if [ -f 'usage_log.txt' ]; then 2024-08-07T19:18:36.3701815Z  zip "logs-${FILE_SUFFIX}.zip" 'usage_log.txt' 2024-08-07T19:18:36.3702271Z fi 2024-08-07T19:18:36.3702624Z if ls test/**/*.log 1> /dev/null 2>&1; then 2024-08-07T19:18:36.3703115Z  zip -r "logs-${FILE_SUFFIX}.zip" test -i '*.log' 2024-08-07T19:18:36.3703570Z fi 2024-08-07T19:18:36.3710537Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T19:18:36.3711006Z env: 2024-08-07T19:18:36.3711302Z GIT_DEFAULT_BRANCH: main 2024-08-07T19:18:36.3711741Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T19:18:36.3712435Z DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T19:18:36.3713271Z FILE_SUFFIX: test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521 2024-08-07T19:18:36.3713851Z ##[endgroup] 2024-08-07T19:18:36.3814695Z adding: usage_log.txt (deflated 92%) 2024-08-07T19:18:36.4076251Z adding: test/test-reports/test_transformers_1.1_10084dc1b049f7b6_.log (deflated 50%) 2024-08-07T19:18:36.4077101Z adding: test/test-reports/functorch.test_ops_2.9_31d3a02af24914a0_.log (deflated 49%) 2024-08-07T19:18:36.4077947Z adding: test/test-reports/functorch.test_ops_7.9_1766e083f3ab9b5c_.log (deflated 49%) 2024-08-07T19:18:36.4078720Z adding: test/test-reports/test_ops_2.11_f1fa1d6bfcf834f8_.log (deflated 49%) 2024-08-07T19:18:36.4079459Z adding: test/test-reports/test_ops_7.11_258bfcd3a64223ff_.log (deflated 49%) 2024-08-07T19:18:36.4080208Z adding: test/test-reports/test_decomp_1.19_80f2be07e1945c8f_.log (deflated 48%) 2024-08-07T19:18:36.4080960Z adding: test/test-reports/test_decomp_6.19_7a2ea32614883937_.log (deflated 48%) 
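
The artifact-packaging steps in this group all follow one pattern: remove any stale archive, zip with -i include globs so only the wanted file types under test/ are kept, and probe optional inputs first so jobs that never produced them (e.g. bazel builds without usage_log.txt) don't fail the step. A condensed sketch of that pattern; the FILE_SUFFIX value is hypothetical and the globstar setting is an assumption about the step's shell:

set -e -o pipefail
shopt -s globstar                 # assumed: lets test/**/*.log match recursively
FILE_SUFFIX="example-suffix"      # hypothetical; real jobs encode config-shard-runner_run-id

rm -f "test-reports-${FILE_SUFFIX}.zip" "logs-${FILE_SUFFIX}.zip"
# -r recurses into test/; -i restricts the archive to matching files only.
zip -r "test-reports-${FILE_SUFFIX}.zip" test -i '*.xml' -i '*.csv'
# Optional inputs are checked before zipping; zip appends to an existing
# archive, so both guarded calls below can contribute to the same logs-*.zip.
if [ -f 'usage_log.txt' ]; then
  zip "logs-${FILE_SUFFIX}.zip" 'usage_log.txt'
fi
if ls test/**/*.log 1> /dev/null 2>&1; then
  zip -r "logs-${FILE_SUFFIX}.zip" test -i '*.log'
fi
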
2024-08-07T19:18:36.4081917Z adding: test/test-reports/test_decomp_11.19_ba415fa6601c404d_.log (deflated 48%) 2024-08-07T19:18:36.4082720Z adding: test/test-reports/test_decomp_16.19_2af6651f83e467a6_.log (deflated 48%) 2024-08-07T19:18:36.4083483Z adding: test/test-reports/test_modules_2.2_ff763601b12f1bfe_.log (deflated 48%) 2024-08-07T19:18:36.4084257Z adding: test/test-reports/test_nestedtensor_1.1_4bff0340dfef71ef_.log (deflated 50%) 2024-08-07T19:18:36.4085229Z adding: test/test-reports/inductor.test_torchinductor_3.4_563b8b34bc219cdf_.log (deflated 51%) 2024-08-07T19:18:36.4086049Z adding: test/test-reports/test_meta_1.5_833b5079e16ce8ea_.log (deflated 49%) 2024-08-07T19:18:36.4086792Z adding: test/test-reports/test_meta_5.5_02b6909cecf74fc4_.log (deflated 49%) 2024-08-07T19:18:36.4087676Z adding: test/test-reports/inductor.test_torchinductor_dynamic_shapes_3.4_3a3cab365abd6929_.log (deflated 53%) 2024-08-07T19:18:36.4088667Z adding: test/test-reports/inductor.test_cuda_cpp_wrapper_1.1_f4a036acdad6717e_.log (stored 0%) 2024-08-07T19:18:36.4089512Z adding: test/test-reports/test_ops_jit_3.3_65f233e182309ddc_.log (deflated 49%) 2024-08-07T19:18:36.4090322Z adding: test/test-reports/dynamo.test_skip_non_tensor_1.1_24e753f022ca7b5d_.log (deflated 51%) 2024-08-07T19:18:36.4091194Z adding: test/test-reports/dynamo.test_interop_1.1_91d0225ded58e1d4_.log (deflated 50%) 2024-08-07T19:18:36.4092097Z adding: test/test-reports/inductor.test_extension_backend_1.1_0c1a11fe1311aff7_.log (deflated 51%) 2024-08-07T19:18:36.4093144Z adding: test/test-reports/inductor.test_compiled_optimizers_1.1_5139c0d6e7a9a7d5_.log (deflated 52%) 2024-08-07T19:18:36.4094036Z adding: test/test-reports/export.test_tools_1.1_3b88329f73a82780_.log (deflated 50%) 2024-08-07T19:18:36.4094931Z adding: test/test-reports/dynamo.test_inline_inbuilt_nn_modules_1.1_7b8541974312d1a3_.log (deflated 52%) 2024-08-07T19:18:36.4096294Z adding: test/test-reports/inductor.test_move_constructors_to_cuda_1.1_2a68e6ea6d2600c6_.log (stored 0%) 2024-08-07T19:18:36.5599433Z adding: test/test-reports/test_transformers_1.1_2ac14b314d452749_.log (deflated 98%) 2024-08-07T19:18:36.5634982Z adding: test/test-reports/functorch.test_ops_2.9_0da5ccb26741bd7a_.log (deflated 92%) 2024-08-07T19:18:36.5668618Z adding: test/test-reports/functorch.test_ops_7.9_f92badfde39bc759_.log (deflated 92%) 2024-08-07T19:18:36.5758121Z adding: test/test-reports/test_ops_2.11_88df29a74f745b59_.log (deflated 92%) 2024-08-07T19:18:36.5847227Z adding: test/test-reports/test_ops_7.11_83a4b96c49e2cadd_.log (deflated 92%) 2024-08-07T19:18:36.5862043Z adding: test/test-reports/test_decomp_6.19_8cbf9f879dfc1640_.log (deflated 89%) 2024-08-07T19:18:36.5875225Z adding: test/test-reports/test_decomp_1.19_e0ec0d2b7659c95d_.log (deflated 89%) 2024-08-07T19:18:36.5890958Z adding: test/test-reports/test_decomp_11.19_d3ddd556460f341c_.log (deflated 90%) 2024-08-07T19:18:36.5905704Z adding: test/test-reports/test_decomp_16.19_a509a51586ebc7b6_.log (deflated 89%) 2024-08-07T19:18:36.5948509Z adding: test/test-reports/test_nestedtensor_1.1_f8f817cb989c2891_.log (deflated 95%) 2024-08-07T19:18:36.5996409Z adding: test/test-reports/test_modules_2.2_07adb4607eb49a41_.log (deflated 94%) 2024-08-07T19:18:36.6002148Z adding: test/test-reports/inductor.test_torchinductor_3.4_0f3db564f79be0bd_.log (deflated 87%) 2024-08-07T19:18:36.6222555Z adding: test/test-reports/test_meta_1.5_1dc589540d194270_.log (deflated 94%) 2024-08-07T19:18:36.6229706Z adding: 
test/test-reports/inductor.test_torchinductor_dynamic_shapes_3.4_78a0d962c2e1239e_.log (deflated 91%) 2024-08-07T19:18:36.6230704Z adding: test/test-reports/inductor.test_cuda_cpp_wrapper_1.1_5bc913c3d0b0a585_.log (stored 0%) 2024-08-07T19:18:36.6455903Z adding: test/test-reports/test_meta_5.5_d6d8ec1fb3599b2f_.log (deflated 94%) 2024-08-07T19:18:36.6456752Z adding: test/test-reports/dynamo.test_skip_non_tensor_1.1_2e14e453c00ee288_.log (deflated 72%) 2024-08-07T19:18:36.6457641Z adding: test/test-reports/dynamo.test_interop_1.1_3a0275a630e9b103_.log (deflated 60%) 2024-08-07T19:18:36.6458742Z adding: test/test-reports/inductor.test_extension_backend_1.1_3016a201a34a0504_.log (deflated 60%) 2024-08-07T19:18:36.6470362Z adding: test/test-reports/test_ops_jit_3.3_b9d3a7b04fcbc5d1_.log (deflated 90%) 2024-08-07T19:18:36.6471165Z adding: test/test-reports/export.test_tools_1.1_6574ec9db8e5b0e2_.log (deflated 62%) 2024-08-07T19:18:36.6488270Z adding: test/test-reports/inductor.test_compiled_optimizers_1.1_6c23bac92bd7cf38_.log (deflated 95%) 2024-08-07T19:18:36.6489260Z adding: test/test-reports/inductor.test_move_constructors_to_cuda_1.1_131488603c74d17c_.log (stored 0%) 2024-08-07T19:18:36.6530984Z adding: test/test-reports/dynamo.test_inline_inbuilt_nn_modules_1.1_ffc2ed4ef395d7da_.log (deflated 93%) 2024-08-07T19:18:36.6572762Z ##[group]Run # Remove any previous debugging artifacts if they exist 2024-08-07T19:18:36.6573508Z # Remove any previous debugging artifacts if they exist 2024-08-07T19:18:36.6574055Z rm -f debug-*.zip 2024-08-07T19:18:36.6574449Z if [ -d 'test/debug' ]; then 2024-08-07T19:18:36.6574914Z  zip -r "debug-${FILE_SUFFIX}.zip" test/debug 2024-08-07T19:18:36.6575338Z fi 2024-08-07T19:18:36.6582149Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0} 2024-08-07T19:18:36.6582634Z env: 2024-08-07T19:18:36.6582919Z GIT_DEFAULT_BRANCH: main 2024-08-07T19:18:36.6583351Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T19:18:36.6584223Z DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T19:18:36.6585025Z FILE_SUFFIX: test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521 2024-08-07T19:18:36.6585592Z ##[endgroup] 2024-08-07T19:18:36.6742081Z ##[group]Run seemethere/upload-artifact-s3@v5 2024-08-07T19:18:36.6742539Z with: 2024-08-07T19:18:36.6742847Z s3-bucket: gha-artifacts 2024-08-07T19:18:36.6743276Z s3-prefix: pytorch/pytorch/10288745067/1/artifact 2024-08-07T19:18:36.6743754Z retention-days: 14 2024-08-07T19:18:36.6744122Z if-no-files-found: warn 2024-08-07T19:18:36.6744492Z path: test-jsons-*.zip 2024-08-07T19:18:36.6744859Z name: artifact 2024-08-07T19:18:36.6745169Z region: us-east-1 2024-08-07T19:18:36.6745495Z env: 2024-08-07T19:18:36.6745791Z GIT_DEFAULT_BRANCH: main 2024-08-07T19:18:36.6746231Z GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all 2024-08-07T19:18:36.6746988Z DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25 2024-08-07T19:18:36.6747651Z ##[endgroup] 2024-08-07T19:18:37.1647279Z NOTE: s3-prefix specified, ignoring name parameter 2024-08-07T19:18:37.1648185Z With the provided path, there will be 1 file uploaded 2024-08-07T19:18:37.1648763Z Uploading to s3 prefix: pytorch/pytorch/10288745067/1/artifact 2024-08-07T19:18:37.1705343Z Starting upload of test-jsons-test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521.zip 2024-08-07T19:18:37.3077519Z Finished upload of 
2024-08-07T19:18:37.3290251Z ##[group]Run seemethere/upload-artifact-s3@v5
2024-08-07T19:18:37.3290709Z with:
2024-08-07T19:18:37.3290994Z   s3-bucket: gha-artifacts
2024-08-07T19:18:37.3291441Z   s3-prefix: pytorch/pytorch/10288745067/1/artifact
2024-08-07T19:18:37.3291916Z   retention-days: 14
2024-08-07T19:18:37.3292274Z   if-no-files-found: error
2024-08-07T19:18:37.3292660Z   path: test-reports-*.zip
2024-08-07T19:18:37.3293041Z   name: artifact
2024-08-07T19:18:37.3293344Z   region: us-east-1
2024-08-07T19:18:37.3293660Z env:
2024-08-07T19:18:37.3293952Z   GIT_DEFAULT_BRANCH: main
2024-08-07T19:18:37.3294397Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T19:18:37.3295618Z   DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25
2024-08-07T19:18:37.3296288Z ##[endgroup]
2024-08-07T19:18:37.7877348Z NOTE: s3-prefix specified, ignoring name parameter
2024-08-07T19:18:37.7878372Z With the provided path, there will be 1 file uploaded
2024-08-07T19:18:37.7879287Z Uploading to s3 prefix: pytorch/pytorch/10288745067/1/artifact
2024-08-07T19:18:37.7932525Z Starting upload of test-reports-test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521.zip
2024-08-07T19:18:38.0107795Z Finished upload of test-reports-test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521.zip
2024-08-07T19:18:38.0314585Z ##[group]Run seemethere/upload-artifact-s3@v5
2024-08-07T19:18:38.0315052Z with:
2024-08-07T19:18:38.0315333Z   s3-bucket: gha-artifacts
2024-08-07T19:18:38.0315780Z   s3-prefix: pytorch/pytorch/10288745067/1/artifact
2024-08-07T19:18:38.0316256Z   retention-days: 14
2024-08-07T19:18:38.0316592Z   if-no-files-found: ignore
2024-08-07T19:18:38.0316961Z   path: logs-*.zip
2024-08-07T19:18:38.0317285Z   name: artifact
2024-08-07T19:18:38.0317585Z   region: us-east-1
2024-08-07T19:18:38.0317902Z env:
2024-08-07T19:18:38.0318196Z   GIT_DEFAULT_BRANCH: main
2024-08-07T19:18:38.0318647Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T19:18:38.0319409Z   DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25
2024-08-07T19:18:38.0320090Z ##[endgroup]
2024-08-07T19:18:38.4830386Z NOTE: s3-prefix specified, ignoring name parameter
2024-08-07T19:18:38.4831021Z With the provided path, there will be 1 file uploaded
2024-08-07T19:18:38.4831630Z Uploading to s3 prefix: pytorch/pytorch/10288745067/1/artifact
2024-08-07T19:18:38.4884179Z Starting upload of logs-test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521.zip
2024-08-07T19:18:38.6855006Z Finished upload of logs-test-default-3-5-amz2023.linux.4xlarge.nvidia.gpu_28476182521.zip
2024-08-07T19:18:38.7055032Z ##[group]Run seemethere/upload-artifact-s3@v5
2024-08-07T19:18:38.7055491Z with:
2024-08-07T19:18:38.7055776Z   s3-bucket: gha-artifacts
2024-08-07T19:18:38.7056216Z   s3-prefix: pytorch/pytorch/10288745067/1/artifact
2024-08-07T19:18:38.7056696Z   retention-days: 14
2024-08-07T19:18:38.7057026Z   if-no-files-found: ignore
2024-08-07T19:18:38.7057423Z   path: debug-*.zip
2024-08-07T19:18:38.7057750Z   name: artifact
2024-08-07T19:18:38.7058050Z   region: us-east-1
2024-08-07T19:18:38.7058368Z env:
2024-08-07T19:18:38.7058660Z   GIT_DEFAULT_BRANCH: main
2024-08-07T19:18:38.7059102Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T19:18:38.7059856Z   DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25
2024-08-07T19:18:38.7060538Z ##[endgroup]
2024-08-07T19:18:39.1564786Z No files were found with the provided path: debug-*.zip. No artifacts will be uploaded.
2024-08-07T19:18:39.1773783Z ##[group]Run # shellcheck disable=SC2156
2024-08-07T19:18:39.1774317Z # shellcheck disable=SC2156
2024-08-07T19:18:39.1775092Z find . -iname "core.[1-9]*" -exec docker exec "${DOCKER_CONTAINER_ID}" sh -c "gdb python {} -ex 'bt' -ex 'q'" \;
2024-08-07T19:18:39.1782939Z shell: /usr/bin/bash -e {0}
2024-08-07T19:18:39.1783304Z env:
2024-08-07T19:18:39.1783609Z   GIT_DEFAULT_BRANCH: main
2024-08-07T19:18:39.1784073Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T19:18:39.1784811Z   DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25
2024-08-07T19:18:39.1785478Z ##[endgroup]
2024-08-07T19:18:39.4603714Z ##[group]Run pytorch/test-infra/.github/actions/teardown-linux@main
2024-08-07T19:18:39.4604339Z with:
2024-08-07T19:18:39.4604611Z env:
2024-08-07T19:18:39.4604920Z   GIT_DEFAULT_BRANCH: main
2024-08-07T19:18:39.4605387Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T19:18:39.4606125Z   DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25
2024-08-07T19:18:39.4606796Z ##[endgroup]
2024-08-07T19:18:39.4636367Z ##[group]Run set -eou pipefail
2024-08-07T19:18:39.4636822Z set -eou pipefail
2024-08-07T19:18:39.4637159Z 
2024-08-07T19:18:39.4637625Z echo "Holding runner for 2 hours until all ssh sessions have logged out"
2024-08-07T19:18:39.4638198Z for _ in $(seq 1440); do
2024-08-07T19:18:39.4638603Z   # Break if no ssh session exists anymore
2024-08-07T19:18:39.4639048Z   if [ "$(who)" = "" ]; then
2024-08-07T19:18:39.4639435Z     break
2024-08-07T19:18:39.4639792Z   fi
2024-08-07T19:18:39.4640081Z   echo "."
2024-08-07T19:18:39.4640543Z   sleep 5
2024-08-07T19:18:39.4640856Z done
2024-08-07T19:18:39.4647612Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2024-08-07T19:18:39.4648094Z env:
2024-08-07T19:18:39.4648384Z   GIT_DEFAULT_BRANCH: main
2024-08-07T19:18:39.4648792Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T19:18:39.4649495Z   DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25
2024-08-07T19:18:39.4652266Z ##[endgroup]
2024-08-07T19:18:39.4680231Z Holding runner for 2 hours until all ssh sessions have logged out
2024-08-07T19:18:39.4787154Z ##[group]Run # ignore expansion of "docker ps -q" since it could be empty
2024-08-07T19:18:39.4787903Z # ignore expansion of "docker ps -q" since it could be empty
2024-08-07T19:18:39.4788483Z # shellcheck disable=SC2046
2024-08-07T19:18:39.4788943Z docker stop $(docker ps -q) || true
2024-08-07T19:18:39.4789429Z # Prune all of the docker images
2024-08-07T19:18:39.4789883Z docker system prune -af
2024-08-07T19:18:39.4797291Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2024-08-07T19:18:39.4797778Z env:
2024-08-07T19:18:39.4798070Z   GIT_DEFAULT_BRANCH: main
2024-08-07T19:18:39.4798482Z   GPU_FLAG: --gpus all -e NVIDIA_DRIVER_CAPABILITIES=all
2024-08-07T19:18:39.4799189Z   DOCKER_CONTAINER_ID: b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25
2024-08-07T19:18:39.4799821Z ##[endgroup]
2024-08-07T19:18:40.1346616Z b555cd11eec4
2024-08-07T19:18:40.7042756Z Deleted Containers:
2024-08-07T19:18:40.7043333Z b555cd11eec4bb5fd3878b5cca27da60da0644bd5e932170a5ffd8aab7f46d25
2024-08-07T19:18:40.7043762Z 
2024-08-07T19:18:46.2329053Z Deleted Images:
2024-08-07T19:18:46.2330795Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9:02ec4fbd5adcb3fb91cf5ce431dec18b633de7d9
2024-08-07T19:18:46.2333199Z untagged: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-focal-cuda12.1-cudnn9-py3-gcc9@sha256:00f47b036f588ca5ef8866f8635fabba5a95cdf9ff1adae7d2a674ef1d4076e9
2024-08-07T19:18:46.2334526Z deleted: sha256:6ec36276acd88c9be8b44d856744037d399b35f4bb1703e637c27ae2b254c901
2024-08-07T19:18:46.2335316Z deleted: sha256:6fbc5fe2ebb0dc33846ab2ade7c5296a0a521e16f71c3b15b8a0c40a8fce5ed3
2024-08-07T19:18:46.2336136Z deleted: sha256:31fb5fadd0be2cc6e0d198fbc00e4f3c25925bc9c9ce79be23f02aeb56f6a55a
2024-08-07T19:18:46.2336944Z deleted: sha256:d1d89c6e648d792c08fdaae4fdad1273f93571a1b8d03c76f38f9e2be7cfe7f1
2024-08-07T19:18:46.2337796Z deleted: sha256:0f1adf9d1a1d4eeb62caa063c1090ea2f50246ed031d8d6627df2f5fe5963067
2024-08-07T19:18:46.2338592Z deleted: sha256:771e69c2306df070c5a944ea3043f81ce5890ce90de66fb6f16edfaf09912b35
2024-08-07T19:18:46.2339367Z deleted: sha256:288aa01dcd110ad9e2e3f48756d987529733a41225b6e8d4898f7386666beeae
2024-08-07T19:18:46.2340539Z deleted: sha256:d9d7c6c9bef79d8c5ce30d89b81820f3664c04fe17ae396b2a121e96fcfdeccf
2024-08-07T19:18:46.2341376Z deleted: sha256:1efa95271d86fd98beb626e63f4c67732daca36bf6e25d864735e43ef4c708f4
2024-08-07T19:18:46.2342172Z deleted: sha256:b1699f7cf1967593f94664569bb49ba431deaac0d1ceaf0d0584046a78ca3be0
2024-08-07T19:18:46.2342960Z deleted: sha256:b70eb1cb717d1c2ce90d5fa7f48e1af32d45f3735b33e1f944762686d8459aa6
2024-08-07T19:18:46.2343726Z deleted: sha256:6460a71f50d3c4bf7773fa072665e881e2a7e0e4e0bd45302e6a73c35ec03898
2024-08-07T19:18:46.2344497Z deleted: sha256:1c1787b4980844f6bfea1fe2386125ac286e9dfd80e253164a7f82b90f9c37bc
2024-08-07T19:18:46.2345290Z deleted: sha256:758fb95d0e1c4b4c78ee59fc4d66c06acfbc3a3ffa870e94200529367b8998e8
2024-08-07T19:18:46.2346071Z deleted: sha256:99beb19b8911fc7c49d248960a97d8ba4bbcbe8afc9fe3142e5af37f08c1c821
2024-08-07T19:18:46.2346861Z deleted: sha256:55596f5748c7f4d8e4d9fea0d1a6cda4627ccc442e652c8578169efe5992a382
2024-08-07T19:18:46.2347659Z deleted: sha256:c279b10b40b22a2fba8aeaad85e7705ffb1afa28e000323b35cc947649b7637d
2024-08-07T19:18:46.2348438Z deleted: sha256:8c882a49501c1d973eacd2ed8da81bce818403153d3fb4cdc84baf14307f9517
2024-08-07T19:18:46.2349224Z deleted: sha256:9fdbb157fcea7486d127e106ff114401c3102b9e9ed879a59247a296b3a908af
2024-08-07T19:18:46.2350008Z deleted: sha256:25b821c2e5d4106b0214b3aa4c88265bda5f90f045f012af912f0a3d2979f919
2024-08-07T19:18:46.2351035Z deleted: sha256:fada342ffd1729d0c38b7afe6767d6840580548d8dac7c62ddf61e24803ad66e
2024-08-07T19:18:46.2352105Z deleted: sha256:14d55c394ec592bb1417a21f696089e0549403bff381cda63d3e92ec80bca298
2024-08-07T19:18:46.2353061Z deleted: sha256:1032dedb70a12fc3c1cae4b10cdd7a07a9d20a64736533d640dd133d46f1ebfe
2024-08-07T19:18:46.2353827Z deleted: sha256:c0f5baad08f078d4210c254240524d9063c5e797564ada8a4bf39270e0fc1300
2024-08-07T19:18:46.2354606Z deleted: sha256:1a8bc02cd2c8f897e1dfd14920703ea75d172660c85026068b39136a1a25db51
2024-08-07T19:18:46.2355376Z deleted: sha256:b2221232f25b8d3a916bf8f74248542af7830e6198814ffbdbab43484bcb700b
2024-08-07T19:18:46.2356162Z deleted: sha256:14dc7064caf62d3d51b65e5175f9ae0e31af4fbed83413736f80d881f0fd742e
2024-08-07T19:18:46.2356926Z deleted: sha256:162ef080ee355a240e7e8fd6761113ad61586e1c883207ba4290c90abf208b54
2024-08-07T19:18:46.2357830Z deleted: sha256:56e3e897838040eb3fb86255ef1be2e3be6705d2e4a1ead67bbd400350ff6d13
2024-08-07T19:18:46.2358751Z deleted: sha256:a4f1392c3713fac843566ae891b9e41c28824e377af84190d33a4132cd4b268d
2024-08-07T19:18:46.2359512Z deleted: sha256:d1a8ab1bb8f3238d77db318f5161058057f84f7357afd2932e1d7edded9a2efb
2024-08-07T19:18:46.2360297Z deleted: sha256:7cab46d23f9fd91e62280b25d274d6c8a03be6d66030adf598944913d280c54f
2024-08-07T19:18:46.2361082Z deleted: sha256:764da7f97ae4318f5157e363c03c86c016b8d1b6d2c4758d203fa8f194753f57
2024-08-07T19:18:46.2361842Z deleted: sha256:373ab2f02049fb6ccab3f886c53934457751a14e5cb68b5e144b04bb7afeed87
2024-08-07T19:18:46.2362634Z deleted: sha256:e46d90cf4cc84661ac93ae9bcfbc675c0fab0582328e4595d075e4f259c4389a
2024-08-07T19:18:46.2363411Z deleted: sha256:1579b987297253e32b13fe0d160a95c567910203783be2efcc3a76650255e658
2024-08-07T19:18:46.2364168Z deleted: sha256:f8542342382d7443bc8be95e90ac743f70abdafcd9987e3785f58ad50509a145
2024-08-07T19:18:46.2365267Z deleted: sha256:57432242bf8090a0c1238a5f004be4da246f4488a2193be182e5cd82237509e4
2024-08-07T19:18:46.2366069Z deleted: sha256:e3c369d15f59fb5f34178d8dcb6bb4a6ff1d527e9baf3182444c24c55600bdf4
2024-08-07T19:18:46.2366853Z deleted: sha256:4215189c6756d59c8a217cc09447c312318d07c0b0d5c9bfb7e4bfb942f05cae
2024-08-07T19:18:46.2367595Z deleted: sha256:b787e6329788262404e4b8293a110727833b63e2e287ccd569a21cb3fb450388
2024-08-07T19:18:46.2368385Z deleted: sha256:69bb5dca4b9fded8d0c731b73bcbed0c3e9ce170c89e79796956a444b0d58c4c
2024-08-07T19:18:46.2369183Z deleted: sha256:8c945f56e57fb4fbfa3f2c74d6109a47db8df7644c4988d0c5791e4214bf30c9
2024-08-07T19:18:46.2369952Z deleted: sha256:fdd079e0e07e11d86ffc744f1036793f3db2aea378660f7489fbf50002439620
2024-08-07T19:18:46.2370877Z deleted: sha256:6a743a461982e1a1b73f29cd187e354f2119f3bd985e1da6ca0b802134cb91ba
2024-08-07T19:18:46.2371678Z deleted: sha256:7b3cbce1a91c69d6a499ef524311c14537930fc04f0ef4b1d73030216ac4a568
2024-08-07T19:18:46.2372479Z deleted: sha256:d7120733e426141c8a9e8f2c3596b12055cd5b1956d141ad640365cb11628a00
2024-08-07T19:18:46.2373231Z deleted: sha256:841ae666d028560ea37e6314ae5f80d72fb071fe7592b577dbbd8156c08cdda2
2024-08-07T19:18:46.2374018Z deleted: sha256:9d85a8c4ea0c0bc631a4d3e5472e8ed8e2312a4dc5a38f6e9890edeb37183186
2024-08-07T19:18:46.2374806Z deleted: sha256:606f6418680b5c53158271dfaca16c668f3b9dd25bb2b738f5621b6a56d08cdb
2024-08-07T19:18:46.2375591Z deleted: sha256:5ca4df537c3efc44dfde3bebb377ed490bc4d1cd5d6e5fa2fd9549dcb456c471
2024-08-07T19:18:46.2376390Z deleted: sha256:c6f3f6b7969e62b95fe926a1963e6628cb2f1b5388f0b12115564855e59423f1
2024-08-07T19:18:46.2377172Z deleted: sha256:ce178391f5f25e4ebe6ba8e84c84831fb31b8a9d4e82dbc270a73ddeccbf1c2c
2024-08-07T19:18:46.2377961Z deleted: sha256:d154e0a00edb13898d564f38c090096dfeb7a90beb0bd8addd57eca33f52151e
2024-08-07T19:18:46.2378754Z deleted: sha256:7226a8914ccef500b1d0327370cf9e3a66fbcb0b59a5f3b018981372152fc645
2024-08-07T19:18:46.2379531Z deleted: sha256:ce5fab2081603cd22d9f253d98828ddd60c0f8c44c5413d0615f4264f1cd6a7b
2024-08-07T19:18:46.2380312Z deleted: sha256:b3bc21b2cb6a9ca2b1dbbc412812581089c8ace8cc6b8d2b2767f0b3cbe8b99c
2024-08-07T19:18:46.2381103Z deleted: sha256:53e8f21eb1afc2c8550bef59d67810a5f79671d1d3d3924577f810321e66886b
2024-08-07T19:18:46.2381883Z deleted: sha256:f9770918a5cc46e8f6aa13da6903754fa89a6bcc9f6e11df1d61f9340e452cc8
2024-08-07T19:18:46.2382657Z deleted: sha256:bce2de65cb97e6a64f59fe7fc78644a2f3d62cf7769073cb59dcbb52009fc5d0
2024-08-07T19:18:46.2383525Z deleted: sha256:568d161c6888a8b0b09d7feb561f790a31d47c7ff7ca1c89c0ff4025f5b02e3a
2024-08-07T19:18:46.2384304Z deleted: sha256:2da22d4dfb814e7a3f4444f60464109149d35bc3e078e197f600bd9f6cbb9f6b
2024-08-07T19:18:46.2385083Z deleted: sha256:4af2daf4a1277af00005823106d88d704d5d0499eef553f01875c6f94380b2e3
2024-08-07T19:18:46.2385853Z deleted: sha256:e52d382a6cbfc1bb5122186d4b1401de79de002a9cc323ba5047dbf266f4d3a3
2024-08-07T19:18:46.2386642Z deleted: sha256:d10fe93dc5314d752e8882c5877457a6ae733a93a1e246cd6f79301de9325e9c
2024-08-07T19:18:46.2387492Z deleted: sha256:58d5ecb60ded3f99102b25d909a729386d92a44bf68b77ed3b49bf27d978e26a
2024-08-07T19:18:46.2388264Z deleted: sha256:dd522c9c6fdd98e9e4584578a66d9122b36ee8f856bba2140fcaaf908c7a68e8
2024-08-07T19:18:46.2389043Z deleted: sha256:3584fb50d1cbc80b57118953f1dc36ecb092b5986d56211f711b9161f403d66c
2024-08-07T19:18:46.2389824Z deleted: sha256:35a033279d5c9bd6b862bfa9331d8fcbf98bbaf778a778fabc882384db9204f8
2024-08-07T19:18:46.2390598Z deleted: sha256:ab9b00c496073d62e69e032178858d3d6d4c4bab9e87065d3785c6da57351d00
2024-08-07T19:18:46.2391365Z deleted: sha256:7ed947299e109c2459de6e240d86f049c30f93f2380659ce441d8737b8cd065f
2024-08-07T19:18:46.2392142Z deleted: sha256:410d5a7f7a9a1cb4551c106b9cc728a9dcff598f9be231a351ce6a0a33f81e64
2024-08-07T19:18:46.2392919Z deleted: sha256:5f0021bb56efa14bb93978c01513e2a1187ba30e69bdb0546d0ef39b30873f88
2024-08-07T19:18:46.2393707Z deleted: sha256:c1601aa97eb84151c14da3aeea351201bd99144d36d66b397f6555d80245d86d
2024-08-07T19:18:46.2394509Z deleted: sha256:72af05a89be22accdc1ca5d66dcbbb33993a9ef5997f849df1d4ba4c48049f25
2024-08-07T19:18:46.2396135Z deleted: sha256:38b90c9663dcaa2bc57a4dd3008298e7ea93e9535fa42312b4dc4246e7491af9
2024-08-07T19:18:46.2396950Z deleted: sha256:bda61b6cefb3ec8eeb74fef1ca1c7f9a5845fe5e8f07b8123323e897425d5c29
2024-08-07T19:18:46.2397749Z deleted: sha256:5a18e1aa877074529a84cbddf19f8d5403787823378ceae6b72fb62f78d43037
2024-08-07T19:18:46.2398559Z deleted: sha256:6c3e7df31590f02f10cb71fc4eb27653e9b428df2e6e5421a455b062bd2e39f9
2024-08-07T19:18:46.2399019Z 
2024-08-07T19:18:46.2399171Z Total reclaimed space: 25.18GB
2024-08-07T19:18:46.2470003Z Post job cleanup.
2024-08-07T19:18:46.2548838Z Post job cleanup.
2024-08-07T19:18:46.3737119Z [command]/usr/bin/git version
2024-08-07T19:18:46.3797475Z git version 2.40.1
2024-08-07T19:18:46.3859727Z Temporarily overriding HOME='/home/ec2-user/actions-runner/_work/_temp/71d60b38-1737-4402-b507-bf7d5a841b3b' before making global git config changes
2024-08-07T19:18:46.3860906Z Adding repository directory to the temporary git global config as a safe directory
2024-08-07T19:18:46.3867319Z [command]/usr/bin/git config --global --add safe.directory /home/ec2-user/actions-runner/_work/pytorch/pytorch
2024-08-07T19:18:46.3915634Z [command]/usr/bin/git config --local --name-only --get-regexp core\.sshCommand
2024-08-07T19:18:46.3955598Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'core\.sshCommand' && git config --local --unset-all 'core.sshCommand' || :"
2024-08-07T19:18:46.4313422Z Entering 'android/libs/fbjni'
2024-08-07T19:18:46.4382929Z Entering 'third_party/FP16'
2024-08-07T19:18:46.4448853Z Entering 'third_party/FXdiv'
2024-08-07T19:18:46.4513811Z Entering 'third_party/NNPACK'
2024-08-07T19:18:46.4578870Z Entering 'third_party/VulkanMemoryAllocator'
2024-08-07T19:18:46.4645956Z Entering 'third_party/XNNPACK'
2024-08-07T19:18:46.4730960Z Entering 'third_party/benchmark'
2024-08-07T19:18:46.4796426Z Entering 'third_party/cpp-httplib'
2024-08-07T19:18:46.4859643Z Entering 'third_party/cpuinfo'
2024-08-07T19:18:46.4925394Z Entering 'third_party/cudnn_frontend'
2024-08-07T19:18:46.4990227Z Entering 'third_party/cutlass'
2024-08-07T19:18:46.5065288Z Entering 'third_party/eigen'
2024-08-07T19:18:46.5133330Z Entering 'third_party/fbgemm'
2024-08-07T19:18:46.5201280Z Entering 'third_party/fbgemm/third_party/asmjit'
2024-08-07T19:18:46.5263067Z Entering 'third_party/fbgemm/third_party/cpuinfo'
2024-08-07T19:18:46.5328189Z Entering 'third_party/fbgemm/third_party/cutlass'
2024-08-07T19:18:46.5399507Z Entering 'third_party/fbgemm/third_party/googletest'
2024-08-07T19:18:46.5463072Z Entering 'third_party/fbgemm/third_party/hipify_torch'
2024-08-07T19:18:46.5527627Z Entering 'third_party/flatbuffers'
2024-08-07T19:18:46.5594692Z Entering 'third_party/fmt'
2024-08-07T19:18:46.5659005Z Entering 'third_party/foxi'
2024-08-07T19:18:46.5724420Z Entering 'third_party/gemmlowp/gemmlowp'
2024-08-07T19:18:46.5789424Z Entering 'third_party/gloo'
2024-08-07T19:18:46.5855071Z Entering 'third_party/googletest'
2024-08-07T19:18:46.5920347Z Entering 'third_party/ideep'
2024-08-07T19:18:46.5983088Z Entering 'third_party/ideep/mkl-dnn'
2024-08-07T19:18:46.6055146Z Entering 'third_party/ittapi'
2024-08-07T19:18:46.6120293Z Entering 'third_party/kineto'
2024-08-07T19:18:46.6183495Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2024-08-07T19:18:46.6247011Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2024-08-07T19:18:46.6312662Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2024-08-07T19:18:46.6375894Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2024-08-07T19:18:46.6440538Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2024-08-07T19:18:46.6504505Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2024-08-07T19:18:46.6569836Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2024-08-07T19:18:46.6634347Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2024-08-07T19:18:46.6700353Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2024-08-07T19:18:46.6764841Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2024-08-07T19:18:46.6831238Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2024-08-07T19:18:46.6895916Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2024-08-07T19:18:46.6961126Z Entering 'third_party/mimalloc'
2024-08-07T19:18:46.7027179Z Entering 'third_party/nccl/nccl'
2024-08-07T19:18:46.7091431Z Entering 'third_party/nlohmann'
2024-08-07T19:18:46.7157408Z Entering 'third_party/onnx'
2024-08-07T19:18:46.7241771Z Entering 'third_party/onnx/third_party/benchmark'
2024-08-07T19:18:46.7307179Z Entering 'third_party/onnx/third_party/pybind11'
2024-08-07T19:18:46.7374148Z Entering 'third_party/opentelemetry-cpp'
2024-08-07T19:18:46.7440512Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2024-08-07T19:18:46.7504278Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2024-08-07T19:18:46.7568080Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2024-08-07T19:18:46.7632801Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2024-08-07T19:18:46.7698217Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2024-08-07T19:18:46.7761327Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2024-08-07T19:18:46.7825144Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2024-08-07T19:18:46.7888000Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2024-08-07T19:18:46.7956835Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2024-08-07T19:18:46.8024427Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2024-08-07T19:18:46.8112486Z Entering 'third_party/pocketfft'
2024-08-07T19:18:46.8176601Z Entering 'third_party/protobuf'
2024-08-07T19:18:46.8245247Z Entering 'third_party/protobuf/third_party/benchmark'
2024-08-07T19:18:46.8309501Z Entering 'third_party/protobuf/third_party/googletest'
2024-08-07T19:18:46.8374431Z Entering 'third_party/psimd'
2024-08-07T19:18:46.8439374Z Entering 'third_party/pthreadpool'
2024-08-07T19:18:46.8505605Z Entering 'third_party/pybind11'
2024-08-07T19:18:46.8570044Z Entering 'third_party/python-peachpy'
2024-08-07T19:18:46.8634858Z Entering 'third_party/sleef'
2024-08-07T19:18:46.8700519Z Entering 'third_party/tensorpipe'
2024-08-07T19:18:46.8763638Z Entering 'third_party/tensorpipe/third_party/googletest'
2024-08-07T19:18:46.8828917Z Entering 'third_party/tensorpipe/third_party/libnop'
2024-08-07T19:18:46.8890813Z Entering 'third_party/tensorpipe/third_party/libuv'
2024-08-07T19:18:46.8955025Z Entering 'third_party/tensorpipe/third_party/pybind11'
2024-08-07T19:18:46.9018519Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2024-08-07T19:18:46.9105256Z [command]/usr/bin/git config --local --name-only --get-regexp http\.https\:\/\/github\.com\/\.extraheader
2024-08-07T19:18:46.9135320Z http.https://github.com/.extraheader
2024-08-07T19:18:46.9147097Z [command]/usr/bin/git config --local --unset-all http.https://github.com/.extraheader
2024-08-07T19:18:46.9193902Z [command]/usr/bin/git submodule foreach --recursive sh -c "git config --local --name-only --get-regexp 'http\.https\:\/\/github\.com\/\.extraheader' && git config --local --unset-all 'http.https://github.com/.extraheader' || :"
2024-08-07T19:18:46.9542022Z Entering 'android/libs/fbjni'
2024-08-07T19:18:46.9585525Z http.https://github.com/.extraheader
2024-08-07T19:18:46.9626571Z Entering 'third_party/FP16'
2024-08-07T19:18:46.9670757Z http.https://github.com/.extraheader
2024-08-07T19:18:46.9711021Z Entering 'third_party/FXdiv'
2024-08-07T19:18:46.9753745Z http.https://github.com/.extraheader
2024-08-07T19:18:46.9793521Z Entering 'third_party/NNPACK'
2024-08-07T19:18:46.9837811Z http.https://github.com/.extraheader
2024-08-07T19:18:46.9876749Z Entering 'third_party/VulkanMemoryAllocator'
2024-08-07T19:18:46.9921933Z http.https://github.com/.extraheader
2024-08-07T19:18:46.9961253Z Entering 'third_party/XNNPACK'
2024-08-07T19:18:47.0006383Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0064967Z Entering 'third_party/benchmark'
2024-08-07T19:18:47.0108728Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0147934Z Entering 'third_party/cpp-httplib'
2024-08-07T19:18:47.0190492Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0230952Z Entering 'third_party/cpuinfo'
2024-08-07T19:18:47.0274658Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0315221Z Entering 'third_party/cudnn_frontend'
2024-08-07T19:18:47.0358642Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0399405Z Entering 'third_party/cutlass'
2024-08-07T19:18:47.0443083Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0491935Z Entering 'third_party/eigen'
2024-08-07T19:18:47.0536101Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0578105Z Entering 'third_party/fbgemm'
2024-08-07T19:18:47.0621973Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0661430Z Entering 'third_party/fbgemm/third_party/asmjit'
2024-08-07T19:18:47.0705876Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0745417Z Entering 'third_party/fbgemm/third_party/cpuinfo'
2024-08-07T19:18:47.0788439Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0828930Z Entering 'third_party/fbgemm/third_party/cutlass'
2024-08-07T19:18:47.0871327Z http.https://github.com/.extraheader
2024-08-07T19:18:47.0919258Z Entering 'third_party/fbgemm/third_party/googletest'
2024-08-07T19:18:47.0961707Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1001393Z Entering 'third_party/fbgemm/third_party/hipify_torch'
2024-08-07T19:18:47.1044081Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1084383Z Entering 'third_party/flatbuffers'
2024-08-07T19:18:47.1128238Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1169843Z Entering 'third_party/fmt'
2024-08-07T19:18:47.1214012Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1252446Z Entering 'third_party/foxi'
2024-08-07T19:18:47.1295508Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1335586Z Entering 'third_party/gemmlowp/gemmlowp'
2024-08-07T19:18:47.1379179Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1420134Z Entering 'third_party/gloo'
2024-08-07T19:18:47.1462668Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1503949Z Entering 'third_party/googletest'
2024-08-07T19:18:47.1547158Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1586504Z Entering 'third_party/ideep'
2024-08-07T19:18:47.1630402Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1668499Z Entering 'third_party/ideep/mkl-dnn'
2024-08-07T19:18:47.1711673Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1759192Z Entering 'third_party/ittapi'
2024-08-07T19:18:47.1802759Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1841371Z Entering 'third_party/kineto'
2024-08-07T19:18:47.1885405Z http.https://github.com/.extraheader
2024-08-07T19:18:47.1926068Z Entering 'third_party/kineto/libkineto/third_party/dynolog'
2024-08-07T19:18:47.1968775Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2014061Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/DCGM'
2024-08-07T19:18:47.2056965Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2100212Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/cpr'
2024-08-07T19:18:47.2143500Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2188884Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/fmt'
2024-08-07T19:18:47.2234646Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2275130Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags'
2024-08-07T19:18:47.2320029Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2359262Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/gflags/doc'
2024-08-07T19:18:47.2403689Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2449187Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/glog'
2024-08-07T19:18:47.2493900Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2572069Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/googletest'
2024-08-07T19:18:47.2616321Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2656817Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/json'
2024-08-07T19:18:47.2701917Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2743913Z Entering 'third_party/kineto/libkineto/third_party/dynolog/third_party/pfs'
2024-08-07T19:18:47.2787535Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2830331Z Entering 'third_party/kineto/libkineto/third_party/fmt'
2024-08-07T19:18:47.2874175Z http.https://github.com/.extraheader
2024-08-07T19:18:47.2915717Z Entering 'third_party/kineto/libkineto/third_party/googletest'
2024-08-07T19:18:47.2959348Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3002860Z Entering 'third_party/mimalloc'
2024-08-07T19:18:47.3046791Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3086838Z Entering 'third_party/nccl/nccl'
2024-08-07T19:18:47.3132866Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3173528Z Entering 'third_party/nlohmann'
2024-08-07T19:18:47.3218422Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3259374Z Entering 'third_party/onnx'
2024-08-07T19:18:47.3303068Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3362811Z Entering 'third_party/onnx/third_party/benchmark'
2024-08-07T19:18:47.3408085Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3447980Z Entering 'third_party/onnx/third_party/pybind11'
2024-08-07T19:18:47.3492999Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3535827Z Entering 'third_party/opentelemetry-cpp'
2024-08-07T19:18:47.3578768Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3621911Z Entering 'third_party/opentelemetry-cpp/third_party/benchmark'
2024-08-07T19:18:47.3664619Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3705799Z Entering 'third_party/opentelemetry-cpp/third_party/googletest'
2024-08-07T19:18:47.3748566Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3788586Z Entering 'third_party/opentelemetry-cpp/third_party/ms-gsl'
2024-08-07T19:18:47.3839825Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3878921Z Entering 'third_party/opentelemetry-cpp/third_party/nlohmann-json'
2024-08-07T19:18:47.3923830Z http.https://github.com/.extraheader
2024-08-07T19:18:47.3967258Z Entering 'third_party/opentelemetry-cpp/third_party/opentelemetry-proto'
2024-08-07T19:18:47.4011210Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4051621Z Entering 'third_party/opentelemetry-cpp/third_party/opentracing-cpp'
2024-08-07T19:18:47.4098775Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4137882Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp'
2024-08-07T19:18:47.4182955Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4223494Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/civetweb'
2024-08-07T19:18:47.4267300Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4310789Z Entering 'third_party/opentelemetry-cpp/third_party/prometheus-cpp/3rdparty/googletest'
2024-08-07T19:18:47.4353639Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4396989Z Entering 'third_party/opentelemetry-cpp/tools/vcpkg'
2024-08-07T19:18:47.4440525Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4506761Z Entering 'third_party/pocketfft'
2024-08-07T19:18:47.4549665Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4589109Z Entering 'third_party/protobuf'
2024-08-07T19:18:47.4634853Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4677539Z Entering 'third_party/protobuf/third_party/benchmark'
2024-08-07T19:18:47.4722010Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4761005Z Entering 'third_party/protobuf/third_party/googletest'
2024-08-07T19:18:47.4805354Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4847166Z Entering 'third_party/psimd'
2024-08-07T19:18:47.4892234Z http.https://github.com/.extraheader
2024-08-07T19:18:47.4931753Z Entering 'third_party/pthreadpool'
2024-08-07T19:18:47.4975241Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5014611Z Entering 'third_party/pybind11'
2024-08-07T19:18:47.5058037Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5097742Z Entering 'third_party/python-peachpy'
2024-08-07T19:18:47.5141383Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5179511Z Entering 'third_party/sleef'
2024-08-07T19:18:47.5222934Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5262960Z Entering 'third_party/tensorpipe'
2024-08-07T19:18:47.5308160Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5347546Z Entering 'third_party/tensorpipe/third_party/googletest'
2024-08-07T19:18:47.5390824Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5430553Z Entering 'third_party/tensorpipe/third_party/libnop'
2024-08-07T19:18:47.5472845Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5511986Z Entering 'third_party/tensorpipe/third_party/libuv'
2024-08-07T19:18:47.5553984Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5593407Z Entering 'third_party/tensorpipe/third_party/pybind11'
2024-08-07T19:18:47.5637127Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5675494Z Entering 'third_party/tensorpipe/third_party/pybind11/tools/clang'
2024-08-07T19:18:47.5719268Z http.https://github.com/.extraheader
2024-08-07T19:18:47.5864815Z A job completed hook has been configured by the self-hosted runner administrator
2024-08-07T19:18:47.5898456Z ##[group]Run '/home/ec2-user/runner-scripts/after_job.sh'
2024-08-07T19:18:47.5904551Z shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
2024-08-07T19:18:47.5905029Z ##[endgroup]
2024-08-07T19:18:55.9530472Z Cleaning up orphan processes